input (string, 286-19k chars) | output (string, 1-15.8k chars) | metadata (dict) | _instance_id (string, 15-62 chars)
---|---|---|---|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The cost of annotating training data has traditionally been a bottleneck for supervised learning approaches.
The problem is further exacerbated when supervised learning is applied to a number of correlated tasks simultaneously, since the number of labels required scales with the number of tasks.
To mitigate this concern, we propose an active multitask learning algorithm that achieves knowledge transfer between tasks.
The approach forms a so-called committee for each task that jointly makes decisions and directly shares data across similar tasks.
Our approach reduces the number of queries needed during training while maintaining high accuracy on test data.
Empirical results on benchmark datasets show significant improvements on both accuracy and number of query requests.
A triumph of machine learning is the ability to predict with high accuracy.
However, for the dominant paradigm, which is supervised learning, the main bottleneck is the need to annotate data, namely, to obtain labeled training examples.
The problem becomes more pronounced in applications and systems which require a high level of personalization, such as music recommenders, spam filters, etc.
Several thousand labeled emails are usually sufficient for training a good spam filter for a particular user.
However, in real-world email systems, the number of registered users is potentially in the millions, and it might not be feasible to learn a highly personalized spam filter for each of them by getting several thousand labeled data points per user. One method to relieve the need for a prohibitively large amount of labeled data is to leverage the relationship between the tasks, especially by transferring relevant knowledge from information-rich tasks to information-poor ones, which is called multitask learning in the literature.
We consider multitask learning in an online setting where the learner sees the data sequentially, which is more practical in real world applications.
In this setting, the learner receives an example at each time round, along with its task identifier, and then predicts its true label.
Afterwards, the learner queries the true label and updates the model(s) accordingly. The online multitask setting has received increasing attention in the machine learning community in recent years BID6 BID0 BID7 BID9 BID4 BID13 BID11.
However, they make the assumption that the true label is readily available to be queried, which is impractical in many applications.
Also, querying blindly can be inefficient when annotation is costly. Active learning further reduces the work of the annotator by selectively requesting true labels from the oracles.
Most approaches in active learning for sequential and stream-based problems adopt a measure of uncertainty / confidence of the learner in the current example BID5 BID3 BID12 BID8 BID1.
The recent work by BID10 combines active learning with online multitask learning using peers or related tasks.
When the classifier of the current task is not confident, it first queries its similar tasks before requesting a true label from the oracle, incurring a lower cost.
Their learner gives priority to the current task by always checking its confidence first.
In the case when the current task is confident, the opinions of its peers are ignored.
This paper proposes an active multitask learning framework which is more humble, in the sense that the predictions of both the current task and its peers are considered simultaneously using a weighted sum.
We have a committee which makes joint decisions for each task.
In addition, after the true label of a training sample is obtained, this sample is shared directly with similar tasks, which makes training more efficient.
We propose a new active multitask learning algorithm that encourages more knowledge transfer among tasks compared to the state-of-the-art models, by using joint decision / prediction and directly sharing training examples with true labels among similar tasks.
Our proposed methods achieve both higher accuracy and lower number of queries on three benchmark datasets for multitask learning problems.
Future work includes theoretical analysis of the error bound and comparison with those of the baseline models.
Another interesting direction is to handle unbalanced task data.
In other words, one task has much more / less training data than the others. | We propose an active multitask learning algorithm that achieves knowledge transfer between tasks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:619 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Named entity recognition (NER) and relation extraction (RE) are two important tasks in information extraction and retrieval (IE & IR).
Recent work has demonstrated that it is beneficial to learn these tasks jointly, which avoids the propagation of error inherent in pipeline-based systems and improves performance.
However, state-of-the-art joint models typically rely on external natural language processing (NLP) tools, such as dependency parsers, limiting their usefulness to domains (e.g. news) where those tools perform well.
The few neural, end-to-end models that have been proposed are trained almost completely from scratch.
In this paper, we propose a neural, end-to-end model for jointly extracting entities and their relations which does not rely on external NLP tools and which integrates a large, pre-trained language model.
Because the bulk of our model's parameters are pre-trained and we eschew recurrence for self-attention, our model is fast to train.
On 5 datasets across 3 domains, our model matches or exceeds state-of-the-art performance, sometimes by a large margin.
The extraction of named entities (named entity recognition, NER) and their semantic relations (relation extraction, RE) are key tasks in information extraction and retrieval (IE & IR) .
Given a sequence of text (usually a sentence), the objective is to identify both the named entities and the relations between them.
This information is useful in a variety of NLP tasks such as question answering, knowledge base population, and semantic search (Jiang, 2012) .
In the biomedical domain, NER and RE facilitate large-scale biomedical data analysis, such as network biology (Zhou et al., 2014) , gene prioritization (Aerts et al., 2006) , drug repositioning (Wang & Zhang, 2013) and the creation of curated databases (Li et al., 2015) .
In the clinical domain, NER and RE can aid in disease and treatment prediction, readmission prediction, de-identification, and patient cohort identification (Miotto et al., 2017) .
Most commonly, the tasks of NER and RE are approached as a pipeline, with NER preceding RE.
There are two main drawbacks to this approach: (1) Pipeline systems are prone to error propagation between the NER and RE systems.
(2) One task is not able to exploit useful information from the other (e.g. the type of relation identified by the RE system may be useful to the NER system for determining the type of entities involved in the relation, and vice versa).
More recently, joint models that simultaneously learn to extract entities and relations have been proposed, alleviating the aforementioned issues and achieving state-of-the-art performance (Miwa & Sasaki, 2014; Miwa & Bansal, 2016; Gupta et al., 2016; Li et al., 2016; Adel & Schütze, 2017; Bekoulis et al., 2018a; b; Nguyen & Verspoor, 2019; Li et al., 2019) .
Many of the proposed joint models for entity and relation extraction rely heavily on external natural language processing (NLP) tools such as dependency parsers.
For instance, Miwa & Bansal (2016) propose a recurrent neural network (RNN)-based joint model that uses a bidirectional long-short term memory network (BiLSTM) to model the entities and a tree-LSTM to model the relations between entities; Li et al. (2017) propose a similar model for biomedical text.
The tree-LSTM uses dependency tree information extracted using an external dependency parser to model relations between entities.
The use of these external NLP tools limits the effectiveness of a model to domains (e.g. news) where those NLP tools perform well.
As a remedy to this problem, Bekoulis et al. (2018a) proposes a neural, end-to-end system that jointly learns to extract entities and relations without relying on external NLP tools.
In Bekoulis et al. (2018b) , they augment this model with adversarial training.
Nguyen & Verspoor (2019) propose a different, albeit similar end-to-end neural model which makes use of deep biaffine attention (Dozat & Manning, 2016) .
Li et al. (2019) approach the problem with multi-turn question answering, posing templated queries to a BERT-based QA model (Devlin et al., 2018) whose answers constitute extracted entities and their relations and achieve state-of-the-art results on three popular benchmark datasets.
While demonstrating strong performance, end-to-end systems like Bekoulis et al. (2018a; b) and Nguyen & Verspoor (2019) suffer from two main drawbacks.
The first is that most of the model's parameters are trained from scratch.
For large datasets, this can lead to long training times.
For small datasets, which are common in the biomedical and clinical domains where it is particularly challenging to acquire labelled data, this can lead to poor performance and/or overfitting.
The second is that these systems typically contain RNNs, which are sequential in nature and cannot be parallelized within training examples.
The multi-pass QA model proposed in Li et al. (2019) alleviates these issues by incorporating a pre-trained language model, BERT (Devlin et al., 2018) , which eschews recurrence for self-attention.
The main limitation of their approach is that it relies on handcrafted question templates to achieve maximum performance.
This may become a limiting factor where domain expertise is required to craft such questions (e.g., for biomedical or clinical corpora).
Additionally, one has to create a question template for each entity and relation type of interest.
In this study, we propose an end-to-end model for joint NER and RE which addresses all of these issues.
Similar to past work, our model can be viewed as a mixture of a NER module and a RE module (Figure 1 ).
Unlike most previous works, we include a pre-trained, transformer-based language model, specifically BERT (Devlin et al., 2018) , which achieved state-of-the-art performance across many NLP tasks.
The weights of the BERT model are fine-tuned during training, and the entire model is trained in an end-to-end fashion.
Our main contributions are as follows: (1) Our solution is truly end-to-end, relying on no handcrafted features (e.g. templated questions) or external NLP tools (e.g. dependency parsers).
(2) Our model is fast to train (e.g. under 10 minutes on a single GPU for the CoNLL04 corpus), as most of its parameters are pre-trained and we avoid recurrence.
(3) We match or exceed state-of-the-art performance for joint NER and RE on 5 datasets across 3 domains.
Figure 1 illustrates the architecture of our approach.
Our model is composed of an NER module and an RE module.
The NER module is identical to the one proposed by Devlin et al. (2018) .
For a given input sequence s of N word tokens w_1, w_2, ..., w_N, the pre-trained BERT_BASE model first produces a sequence of vectors, x
In this paper, we introduced an end-to-end model for entity and relation extraction.
Our key contributions are: (1) No reliance on any hand-crafted features (e.g. templated questions) or external NLP tools (e.g. dependency parsers).
(2) Integration of a pre-trained, transformer-based language model.
(3) State-of-the-art performance on 5 datasets across 3 domains.
Furthermore, our model is inherently modular.
One can easily initialize the language model with pre-trained weights better suited for a domain of interest (e.g. BioBERT for biomedical corpora) or swap BERT for a comparable language model (e.g. XLNet (Yang et al., 2019) ).
Finally, because of (2), our model is fast to train, converging in approximately 1 hour or less on a single GPU for all datasets used in this study.
Our model out-performed previous state-of-the-art performance on ADE by the largest margin (6.53%).
While exciting, we believe this corpus was particularly easy to learn.
The majority of sentences (∼68%) are annotated for two entities (drug and adverse effect) and one relation (adverse drug event).
Ostensibly, a model should be able to exploit this pattern to get near-perfect performance on the majority of sentences in the corpus.
As a test, we ran our model again, this time using ground-truth entities in the RE module (as opposed to predicted entities) and found that the model very quickly reached almost perfect performance for RE on the test set (∼98%).
As such, high performance on the ADE corpus is not likely to transfer to real-world scenarios involving the large-scale annotation of diverse biomedical articles.
In our experiments, we consider only intra-sentence relations.
However, the multiple entities within a document generally exhibit complex, inter-sentence relations.
Our model is not currently capable of extracting such inter-sentence relations and therefore our restriction to intra-sentence relations will limit its usefulness for certain downstream tasks, such as knowledge base creation.
We also ignore the problem of nested entities, which are common in biomedical corpora.
In the future, we would like to extend our model to handle both nested entities and inter-sentence relations.
Finally, given that multilingual, pre-trained weights for BERT exist, we would also expect our model's performance to hold across multiple languages.
We leave this question to future work. | A novel, high-performing architecture for end-to-end named entity recognition and relation extraction that is fast to train. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:62 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Detection of photo manipulation relies on subtle statistical traces, notoriously removed by aggressive lossy compression employed online.
We demonstrate that end-to-end modeling of complex photo dissemination channels allows for codec optimization with explicit provenance objectives.
We design a lightweight trainable lossy image codec, that delivers competitive rate-distortion performance, on par with best hand-engineered alternatives, but has lower computational footprint on modern GPU-enabled platforms.
Our results show that significant improvements in manipulation detection accuracy are possible at fractional costs in bandwidth/storage.
Our codec improved the accuracy from 37% to 86% even at very low bit-rates, well below the practicality of JPEG (QF 20).
While the proposed approach can successfully facilitate pre-screening of photographs shared online, further research is needed to improve model generalization.
We observed that the fine-tuning procedure tends to bias the DCN/FAN models towards the secondary image dataset, in our case the native camera output (NCO).
The baseline DCN was pre-trained on mixed natural images (MNI) with extensive augmentation, leading to competitive results on all test sets.
However, fine-tuning was performed on NCO only.
Characteristic pixel correlations, e.g., due to color interpolation, bias the codec and lead to occasional artifacts in MNIs (mostly in the clic test set; see Appendix B), and deterioration of the rate-distortion trade-off.
The problem is present regardless of λ c , which suggests issues with the fine-tuning protocol (data diversity) and not the forensic optimization objective.
We ran additional experiments by skipping photo acquisition and fine-tuning directly on MNI from the original training set (subset of 2,500 RGB images).
We observed the same behavior (see Appendix C), and the optimized codec was artifact-free on all test sets.
(Although, due to a smaller training set, the model loses some of its performance; cf. MNI results in Fig. A.6 .)
However, the FANs generalized well only to clic and kodak images.
The originally trained FANs generalized reasonably well to different NCO images (including images from other 3 cameras) but not to clic or kodak.
This confirms that existing forensics models are sensitive to data distribution, and that further work will be needed to establish more universal training protocols (see detailed discussion in Appendix D).
Short fine-tuning is known to help (Cozzolino et al., 2018) , and we leave this aspect for future work.
We are also planning to explore new transfer learning protocols (Li & Hoiem, 2017) .
Generalization should also consider other forensic tasks.
We optimized for manipulation detection, which serves as a building block for more complex problems, like processing history analysis or tampering localization (Korus, 2017; Mayer & Stamm, 2019; Wu et al., 2019; Marra et al., 2019a) .
However, additional pre-screening may also be needed, e.g., analysis of sensor fingerprints (Chen et al., 2008) , or identification of computer graphics or synthetic content (Marra et al., 2019b) .
Our study shows that lossy image codecs can be explicitly optimized to retain subtle low-level traces that are useful for photo manipulation detection.
Interestingly, simple inclusion of high frequencies in the signal is insufficient, and the models learns more complex frequency attenuation/amplification patterns.
This allows for reliable authentication even at very low bit-rates, where standard JPEG compression is no longer practical, e.g., at bit-rates around 0.4 bpp where our DCN codec with lowquality settings improved manipulation detection accuracy from 37% to 86%.
We believe the proposed approach is particularly valuable for online media platforms (e.g., Truepic, or Facebook), who need to pre-screen content upon reception, but need to aggressively optimize bandwidth/storage.
The standard soft quantization with a Gaussian kernel (Mentzer et al., 2018) works well for rounding to arbitrary integers, but leads to numerical issues for smaller codebooks.
Values significantly exceeding codebook endpoints have zero affinity to any of the entries, and collapse to the mean (i.e., ≈ 0 in our case; Fig. A.1a) .
Such issues can be addressed by increasing numerical precision, sacrificing accuracy (due to larger kernel bandwidth), or adding explicit conditional statements in the code.
The latter approach is inelegant and cumbersome in graph-based machine learning frameworks like Tensorflow.
We used a t-Student kernel instead and increased precision of the computation to 64-bits.
This doesn't solve the problem entirely, but successfully eliminated all issues that we came across in our experiments, and further improved our entropy estimation accuracy.
Fig. A.2 shows entropy estimation error for Laplace-distributed random values, and different hyper-parameters of the kernels.
We observed the best results for a t-Student kernel with 50 degrees of freedom and bandwidth γ = 25 (marked in red).
This kernel is used in all subsequent experiments.
We experimented with different codebooks and entropy regularization strengths.
Fig. A.3a shows how the quantized latent representation (QLR) changes with the size of the codebook.
The figures also compare the actual histogram with its soft estimate (equation 6).
We observed that the binary codebook is sub-optimal and significantly limits the achievable image quality, especially as the number of feature channels grows.
Adding more entries steadily improved quality, and the codebook with M = 32 entries (values from -15 to 16) seemed to be the point of diminishing returns.
Our entropy-based regularization turned out to be very effective at shaping the QLR (Fig. A.3b ) and dispensed with the need to use other normalization techniques (e.g., GDN).
We used only a single scalar multiplication factor responsible for scaling the distribution.
All baseline and finetuned models use λ H = 250 (last column).
Fig. A.4 visually compares the QLRs of our baseline low-quality codec (16 feature channels) with weak and strong regularization. | We learn an efficient lossy image codec that can be optimized to facilitate reliable photo manipulation detection at fractional cost in payload/quality and even at low bitrates. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:620 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recurrent Neural Networks have long been the dominating choice for sequence modeling.
However, they severely suffer from two issues: they are ineffective at capturing very long-term dependencies and unable to parallelize the sequential computation procedure.
Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently.
Notably, models with multi-head attention such as Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks.
Despite their success, however, these models lack necessary components to model local structures in sequences and heavily rely on position embeddings that have limited effects and require a considerable amount of design efforts.
In this paper, we propose the R-Transformer which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoids their respective drawbacks.
The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings.
We evaluate R-Transformer through extensive experiments with data from a wide range of domains and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks.
Recurrent Neural Networks (RNNs) especially its variants such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) have achieved great success in a wide range of sequence learning tasks including language modeling, speech recognition, recommendation, etc (Mikolov et al., 2010; Sundermeyer et al., 2012; Graves & Jaitly, 2014; Hinton et al., 2012; Hidasi et al., 2015) .
Despite their success, however, the recurrent structure is often troubled by two notorious issues.
First, it easily suffers from gradient vanishing and exploding problems, which largely limits their ability to learn very long-term dependencies (Pascanu et al., 2013) .
Second, the sequential nature of both forward and backward passes makes it extremely difficult, if not impossible, to parallelize the computation, which dramatically increases the time complexity in both training and testing procedure.
Therefore, many recently developed sequence learning models have completely jettisoned the recurrent structure and only rely on convolution operation or attention mechanism that are easy to parallelize and allow the information flow at an arbitrary length.
Two representative models that have drawn great attention are Temporal Convolution Networks(TCN) (Bai et al., 2018) and Transformer (Vaswani et al., 2017) .
In a variety of sequence learning tasks, they have demonstrated comparable or even better performance than that of RNNs (Gehring et al., 2017; Bai et al., 2018; Devlin et al., 2018) .
The remarkable performance achieved by such models largely comes from their ability to capture long-term dependencies in sequences.
In particular, the multi-head attention mechanism in Transformer allows every position to be directly connected to any other positions in a sequence.
Thus, the information can flow across positions without any intermediate loss.
Nevertheless, there are two issues that can harm the effectiveness of multi-head attention mechanism for sequence learning.
The first comes from the loss of sequential information of positions as it treats every position identically.
To mitigate this problem, Transformer introduces position embeddings, whose effects, however, have been shown to be limited (Dehghani et al., 2018; Al-Rfou et al., 2018).
In addition, it requires a considerable amount of effort to design more effective position embeddings or different ways to incorporate them in the learning process (Dai et al., 2019).
(Figure: illustration of one layer of R-Transformer. Three networks are arranged hierarchically: the lower level is a LocalRNN that processes positions within a local window sequentially (the figure shows a local window of size 3); the middle level is a multi-head attention network that captures global long-term dependencies; the upper level is a position-wise feedforward network that conducts non-linear feature transformation. The three networks are connected by residual connections and layer normalization; circles with dashed lines are paddings of the input sequence.)
Second, while multi-head attention mechanism is able to learn the global dependencies, we argue that it ignores the local structures that are inherently important in sequences such as natural languages.
Even with the help of position embeddings, the signals at local positions can still be very weak as the number of other positions is significantly more.
To address the aforementioned limitations of the standard Transformer, in this paper, we propose a novel sequence learning model, termed as R-Transformer.
It is a multi-layer architecture built on RNNs and the standard Transformer, and enjoys the advantages of both worlds while naturally avoids their respective drawbacks.
More specifically, before computing global dependencies of positions with the multi-head attention mechanism, we firstly refine the representation of each position such that the sequential and local information within its neighborhood can be compressed in the representation.
To do this, we introduce a local recurrent neural network, referred to as LocalRNN, to process signals within a local window ending at a given position.
In addition, the LocalRNN operates on local windows of all the positions identically and independently and produces a latent representation for each of them.
In this way, the locality in the sequence is explicitly captured.
In addition, as the local window is sliding along the sequence one position by one position, the global sequential information is also incorporated.
More importantly, because the localRNN is only applied to local windows, the aforementioned two drawbacks of RNNs can be naturally mitigated.
We evaluate the effectiveness of R-Transformer on a variety of sequence learning tasks from different domains, and the empirical results demonstrate that R-Transformer achieves much stronger performance than both TCN and the standard Transformer as well as other state-of-the-art sequence models.
The rest of the paper is organized as follows: Section 2 discusses the sequence modeling problem we aim to solve; The proposed R-Transformer model is presented in Section 3.
In Section 4, we describe the experimental details and discuss the results.
The related work is briefly reviewed in Section 5.
Section 6 concludes this work.
In summary, experimental results have shown that the standard Transformer can achieve better results than RNNs when sequences exhibit very long-term dependencies, i.e., sequential MNIST while its performance can drop dramatically when strong locality exists in sequences, i.e., polyphonic music and language.
Meanwhile, TCN is a very strong sequence model that can effectively learn both local structures and long-term dependencies and has very stable performance in different tasks.
More importantly, the proposed R-Transformer that combines a lower level LocalRNN and a higher level multi-head attention, outperforms both TCN and Transformer by a large margin consistently in most of the tasks.
The experiments are conducted on various sequential learning tasks with datasets from different domains.
Moreover, all experimental settings are fair to all baselines.
Thus, the observations from the experiments are reliable with the current experimental settings.
However, due to computational limitations, we currently restrict our evaluation settings to moderate model and dataset sizes.
Thus, more evaluations on big models and large datasets can make the results more convincing.
We would like to leave this as one future work.
In this paper, we propose a novel generic sequence model that enjoys the advantages of both RNN and the multi-head attention while mitigating their disadvantages.
Specifically, it consists of a LocalRNN that learns the local structures without suffering from any of the weaknesses of RNN and a multi-head attention pooling that effectively captures long-term dependencies without any help of position embeddings.
In addition, the model can be easily implemented with full parallelization over the positions in a sequence.
The empirical results on sequence modeling tasks from a wide range of domains have demonstrated the remarkable advantages of R-Transformer over state-of-the-art nonrecurrent sequence models such as TCN and standard Transformer as well as canonical recurrent architectures. | This paper proposes an effective generic sequence model which leverages the strengths of both RNNs and Multi-head attention. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:621 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Many tasks in natural language processing and related domains require high precision output that obeys dataset-specific constraints.
This level of fine-grained control can be difficult to obtain in large-scale neural network models.
In this work, we propose a structured latent-variable approach that adds discrete control states within a standard autoregressive neural paradigm.
Under this formulation, we can include a range of rich, posterior constraints to enforce task-specific knowledge that is effectively trained into the neural model.
This approach allows us to provide arbitrary grounding of internal model decisions, without sacrificing any representational power of neural models.
Experiments consider applications of this approach for text generation and part-of-speech induction.
For natural language generation, we find that this method improves over standard benchmarks, while also providing fine-grained control. | A structured latent-variable approach that adds discrete control states within a standard autoregressive neural paradigm to provide arbitrary grounding of internal model decisions, without sacrificing any representational power of neural models. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:622 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Suppose a deep classification model is trained with samples that need to be kept private for privacy or confidentiality reasons.
In this setting, can an adversary obtain the private samples if the classification model is given to the adversary?
We call this reverse engineering against the classification model the Classifier-to-Generator (C2G) Attack.
This situation arises when the classification model is embedded into mobile devices for offline prediction (e.g., object recognition for the automatic driving car and face recognition for mobile phone authentication).
For C2G attack, we introduce a novel GAN, PreImageGAN.
In PreImageGAN, the generator is designed to estimate the sample distribution conditioned on the preimage of the classification model $f$, $P(X|f(X)=y)$, where $X$ is the random variable on the sample space and $y$ is the probability vector representing the target label arbitrarily specified by the adversary.
In experiments, we demonstrate PreImageGAN works successfully with hand-written character recognition and face recognition.
In character recognition, we show that, given a recognition model of hand-written digits, PreImageGAN allows the adversary to extract alphabet letter images without knowing that the model is built for alphabet letter images.
In face recognition, we show that, when an adversary obtains a face recognition model for a set of individuals, PreImageGAN allows the adversary to extract face images of specific individuals contained in the set, even when the adversary has no knowledge of the face of the individuals.
Recent rapid advances in deep learning technologies are expected to promote the application of deep learning to online services with recognition of complex objects.
Let us consider the face recognition task as an example.
The probabilistic classification model f takes a face image x and predicts the probability that the given face image is associated with an individual t, f(x) ≃ Pr[T = t | X = x].
The following three scenarios pose situations in which probabilistic classification models need to be revealed in public for online services in real applications:
Prediction with cloud environment: Suppose an enterprise provides an online prediction service with a cloud environment, in which the service takes input from a user and returns predictions to the user in an online manner. The enterprise needs to deploy the model f into the cloud to achieve this.
Prediction with private information: Suppose an enterprise develops a prediction model f (e.g., disease risk prediction) and a user wishes to have a prediction of the model with private input (e.g., personal genetic information). The most straightforward way to preserve the user's privacy entirely is to let the user download the entire model and perform prediction on the user side locally.
Offline prediction: Automatic driving cars or laptops with face authentication contain face/object recognition systems in the device. Since these devices are for mobile use and need to work standalone, the full model f needs to be embedded in the device.
In such situations where the classification model f is revealed, we consider a reverse-engineering problem of models with deep architectures. Let D_tr and d_{X,T} be a set of training samples and its underlying distribution, respectively. Let f be a model trained with D_tr. In this situation, is it possible for an adversary to obtain the training samples D_tr (or its underlying distribution d_{X,T}) if the classification model is given to the adversary? If this is possible, it can cause serious problems, particularly when D_tr or d_{X,T} is private or confidential information.
Privacy violation by releasing face authentication: Let us consider the face authentication task as an example again. Suppose an adversary is given the classification model f. The adversary aims to estimate the data (face) distribution of a target individual t*, d_{X|T=t*}. If this kind of reverse engineering works successfully, serious privacy violation arises because individual faces are private information. Furthermore, once d_{X|T=t*} is revealed, the adversary can draw samples from d_{X|T=t*}, which would cause another privacy violation (say, the adversary can draw an arbitrary number of the target's face images).
Confidential information leakage by releasing object recognizer: Let us consider an object recognition system for automatic driving cars. Suppose a model f takes as input images from car-mounted cameras and detects various objects such as traffic signs or traffic lights. Given f, the reverse engineering reveals the sample distribution of the training samples, which might help adversaries having malicious intentions. For example, generation of adversarial examples that make the recognition system confused without being detected would be possible. Also, this kind of attack allows exposure of hidden functionalities for privileged users or unexpected vulnerabilities of the system.
If this kind of attack is possible, it indicates that careful treatment is needed before releasing model f in public, considering that publication of f might cause the serious problems listed above. We name this type of reverse engineering the classifier-to-generator (C2G) attack. In principle, estimation of labeled sample distributions from a classification/recognition model of complex objects (e.g., face images) is a difficult task for the following two reasons. First, estimation of generative models of complex objects is believed to be a challenging problem itself. Second, model f often does not contain sufficient information to estimate the generative model of samples. In supervised classification, the label space is always much more abstract than the sample space. The classification model thus makes use of only a limited amount of information in the sample space that is sufficient to classify objects into the abstract label space. In this sense, it is difficult to estimate the sample distribution given only classification model f.
To resolve the first difficulty, we employ Generative Adversarial Networks (GANs). GANs are a neural network architecture for generative models which has developed dramatically in the field of deep learning. Also, we exploit one remarkable property of GANs, the ability to interpolate latent variables of inputs. With this interpolation, GANs can generate samples (say, images) that are not included in the training samples, but are nonetheless realistic. Even with this powerful generation ability of GANs, it is difficult to resolve the second difficulty. To overcome this for the C2G attack, we assume that the adversary can make use of unlabeled auxiliary samples D_aux as background knowledge. Suppose f is a face recognition model that recognizes Alice and Bob, and the adversary tries to extract Alice's face image from f. It is natural to suppose that the adversary can use public face image samples that do not contain Alice's and Bob's face images as D_aux. PreImageGAN exploits unlabeled auxiliary samples to complement knowledge extracted from the model f.
As described in this paper, we formulated the Classifier-to-Generator (C2G) Attack, which estimates the training sample distribution ρ_{t*} from a given classification model f and the auxiliary dataset D_aux.
As an algorithm for C2G attack, we proposed PreImageGAN which is based on ACGAN and WGAN.
Fig. 7 represents the results of the C2G attack when the auxiliary data consists of noisy images which are drawn from the uniform distribution.
All generated images look like noise images, not numeric letters.
This result reveals that the C2G attack fails when the auxiliary dataset is not sufficiently informative.
More specifically, we can consider that the C2G attack fails when the attacker does not have appropriate background knowledge of the training data distribution.
Figure 7: Images generated by the C2G attack when the target label is set as t* = 0, 1, 2 and uniformly generated noise images are used as the auxiliary dataset. We used an alphanumeric letter classifier (62 labels) described in Sec. 5.2 as f for this experiment. | Estimation of training data distribution from trained classifier using GAN. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:623 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The goal of standard compressive sensing is to estimate an unknown vector from linear measurements under the assumption of sparsity in some basis.
Recently, it has been shown that significantly fewer measurements may be required if the sparsity assumption is replaced by the assumption that the unknown vector lies near the range of a suitably-chosen generative model.
In particular, in (Bora {\em et al.}, 2017) it was shown that roughly $O(k\log L)$ random Gaussian measurements suffice for accurate recovery when the $k$-input generative model is bounded and $L$-Lipschitz, and that $O(kd \log w)$ measurements suffice for $k$-input ReLU networks with depth $d$ and width $w$. In this paper, we establish corresponding algorithm-independent lower bounds on the sample complexity using tools from minimax statistical analysis.
In accordance with the above upper bounds, our results are summarized as follows: (i) We construct an $L$-Lipschitz generative model capable of generating group-sparse signals, and show that the resulting necessary number of measurements is $\Omega(k \log L)$;
(ii) Using similar ideas, we construct two-layer ReLU networks of high width requiring $\Omega(k \log w)$ measurements, as well as lower-width deep ReLU networks requiring $\Omega(k d)$ measurements.
As a result, we establish that the scaling laws derived in (Bora {\em et al.}, 2017) are optimal or near-optimal in the absence of further assumptions.
The problem of sparse estimation via linear measurements (commonly referred to as compressive sensing) is well-understood, with theoretical developments including sharp performance bounds for both practical algorithms [1, 2, 3, 4] and (potentially intractable) information-theoretically optimal algorithms [5, 6, 7, 8] .
Following the tremendous success of deep generative models in a variety of applications [9] , a new perspective on compressive sensing was recently introduced, in which the sparsity assumption is replaced by the assumption of the underlying signal being well-modeled by a generative model (typically corresponding to a deep neural network) [10] .
This approach was seen to exhibit impressive performance in experiments, with reductions in the number of measurements by large factors such as 5 to 10 compared to sparsity-based methods.
In addition, [10] provided theoretical guarantees on their proposed algorithm, essentially showing that an L-Lipschitz generative model with bounded k-dimensional inputs leads to reliable recovery with m = O(k log L) random Gaussian measurements (see Section 2 for a precise statement).
Moreover, for a ReLU network generative model from R^k to R^n with width w and depth d, it suffices to have m = O(kd log w).
A variety of follow-up works provided additional theoretical guarantees (e.g., for more specific optimization algorithms [11, 12] , more general models [13] , or under random neural network weights [14, 15] ) for compressive sensing with generative models, but the main results of [10] are by far the most relevant to ours.
In this paper, we address a prominent gap in the existing literature by establishing algorithmindependent lower bounds on the number of measurements needed (e.g., this is explicitly posed as an open problem in [15] ).
Using tools from minimax statistical analysis, we show that for generative models satisfying the assumptions of [10] , the above-mentioned dependencies m = O(k log L) and m = O(kd log w) cannot be improved (or in the latter case, cannot be improved by more than a log n factor) without further assumptions.
Our argument is essentially based on a reduction to compressive sensing with a group sparsity model (e.g., see [16] ), i.e., forming a neural network that is capable of producing such signals.
The proofs are presented in the full paper [17] .
We have established, to our knowledge, the first lower bounds on the sample complexity for compressive sensing with generative models.
To achieve these, we constructed generative models capable of producing group-sparse signals, and then applied a minimax lower bound for group-sparse recovery.
For bounded Lipschitz-continuous generative models we matched the O(k log L) scaling law derived in [10] , and for ReLU-based generative models, we showed that the dependence of the O(kd log w) bound from [10] has an optimal or near-optimal dependence on both the width and depth.
A possible direction for future research is to understand what additional assumptions could be placed on the generative model to further reduce the sample complexity. | We establish that the scaling laws derived in (Bora et al., 2017) are optimal or near-optimal in the absence of further assumptions. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:624 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents.
Here we explore whether modern deep reinforcement learning can be used to train agents to perform causal reasoning.
We adopt a meta-learning approach, where the agent learns a policy for conducting experiments via causal interventions, in order to support a subsequent task which rewards making accurate causal inferences. We also found the agent could make sophisticated counterfactual predictions, as well as learn to draw causal inferences from purely observational data.
Though powerful formalisms for causal reasoning have been developed, applying them in real-world domains can be difficult because fitting to large amounts of high dimensional data often requires making idealized assumptions.
Our results suggest that causal reasoning in complex settings may benefit from powerful learning-based approaches.
More generally, this work may offer new strategies for structured exploration in reinforcement learning, by providing agents with the ability to perform—and interpret—experiments.
Many machine learning algorithms are rooted in discovering patterns of correlation in data.
While this has been sufficient to excel in several areas BID20 BID7 , sometimes the problems we are interested in are fundamentally causal.
Answering questions such as "Does smoking cause cancer?" or "Was this person denied a job due to racial discrimination?" or "Did this marketing campaign cause sales to go up?" all require an ability to reason about causes and effects and cannot be achieved by purely associative inference.
Even for problems that are not obviously causal, like image classification, it has been suggested that some failure modes emerge from lack of causal understanding.
Causal reasoning may be an essential component of natural intelligence and is present in human babies, rats and even birds BID23 BID14 BID15 BID5 BID21 .
There is a rich literature on formal approaches for defining and performing causal reasoning BID29 BID33 BID8 BID30.
Here we investigate whether procedures for learning and using causal structure can be produced by meta-learning.
The approach of meta-learning is to learn the learning (or inference) procedure itself, directly from data.
We adopt the specific method of BID9 and BID35, training a recurrent neural network (RNN) through model-free reinforcement learning.
We train on a large family of tasks, each underpinned by a different causal structure.
The use of meta-learning avoids the need to manually implement explicit causal reasoning methods in an algorithm, offers advantages of scalability by amortizing computations, and allows automatic incorporation of complex prior knowledge BID1 BID35 BID11.
Additionally, by learning end-to-end, the algorithm has the potential to find the internal representations of causal structure best suited for the types of causal inference required.
This work is the first demonstration that causal reasoning can arise out of model-free reinforcement learning.
This opens up the possibility of leveraging powerful learning-based methods for causal inference in complex settings.
Traditional formal approaches usually decouple the two problems of causal induction (i.e. inferring the structure of the underlying model) and causal inference (i.e. estimating causal effects and answering counterfactual questions), and despite advances in both BID26 BID6 BID27 BID32 BID12 BID22 , inducing models often requires assumptions that are difficult to fit to complex real-world conditions.
By learning these end-to-end, our method can potentially find representations of causal structure best tuned to the specific causal inferences required.
Another key advantage of our meta-RL approach is that it allows the agent to learn to interact with the environment in order to acquire necessary observations in the service of its task-i.e. to perform active learning.
In our experimental domain, our agents' active intervention policy was close to optimal, which demonstrates the promise of agents that can learn to experiment on their environment and perform rich causal reasoning on the observations.
Future work should explore agents that perform experiments to support structured exploration in RL, and optimal experiment design in complex domains where large numbers of blind interventions are prohibitive.
To this end, follow-up work should focus on scaling up our approach to larger environments, with more complex causal structure and a more diverse range of tasks.
Though the results here are a first step in this direction and use relatively standard deep RL components, our approach will likely benefit from more advanced architectures (e.g. BID16 BID17) that allow longer, more complex episodes, as well as models which are more explicitly compositional (e.g. BID3 BID0) or have richer semantics (e.g. BID13) that more explicitly leverage symmetries like equivalence classes in the environment.
We can also compare the performance of these agents to two standard model-free RL baselines.
The Q-total agent learns a Q-value for each action across all steps for all the episodes.
The Q-episode agent learns a Q-value for each action conditioned on the input at each time step [o_t, a_{t-1}, r_{t-1}], but with no LSTM memory to store previous actions and observations.
Since the relationship between action and reward is random between episodes, Q-total was equivalent to selecting actions randomly, resulting in a considerably negative reward.
The Q-episode agent essentially makes sure to not choose the arm that is indicated by m t to be the external intervention (which is assured to be equal to −5), and essentially chooses randomly otherwise, giving an average reward of 0. | meta-learn a learning algorithm capable of causal reasoning | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:625 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Sentiment classification is an active research area with several applications including analysis of political opinions, classifying comments, movie reviews, news reviews and product reviews.
To employ rule based sentiment classification, we require sentiment lexicons.
However, manual construction of sentiment lexicon is time consuming and costly for resource-limited languages.
To bypass manual development time and costs, we tried to build Amharic sentiment lexicons relying on a corpus-based approach.
The intention of this approach is to handle sentiment terms specific to the Amharic language from an Amharic corpus.
A small set of seed terms is manually prepared from three parts of speech: nouns, adjectives and verbs.
We developed algorithms for constructing Amharic sentiment lexicons automatically from an Amharic news corpus.
The corpus-based approach relies on word co-occurrence distributional embeddings, including frequency-based embeddings (i.e., Positive Point-wise Mutual Information, PPMI).
First, we build a word-context unigram frequency count matrix and transform it into a positive point-wise mutual information matrix.
Using this matrix, we computed the cosine distance of mean vector of seed lists and each word in the corpus vocabulary.
Based on the threshold value, the top closest words to the mean vector of seed list are added to the lexicon.
Then the mean vector of the new sentiment seed list is updated and process is repeated until we get sufficient terms in the lexicon.
Using PPMI with threshold value of 100 and 200, we got corpus based Amharic Sentiment lexicons of size 1811 and 3794 respectively by expanding 519 seeds.
Finally, the lexicon generated in corpus based approach is evaluated.
Most of sentiment mining research papers are associated to English languages.
Linguistic computational resources in languages other than English are limited.
Amharic is one of these resource-limited languages.
Due to the advancement of the World Wide Web, Amharic opinionated text is increasing in size.
Predicting the sentiment orientation towards a particular object or service is crucial for business intelligence, government intelligence, market intelligence, or decision support.
For carrying out Amharic sentiment classification, the availability of sentiment lexicons is crucial.
To date, there are two generated Amharic sentiment lexicons.
These are a manually generated lexicon (1000 entries) (Gebremeskel, 2010) and the dictionary-based Amharic SWN and SOCAL lexicons (Neshir Alemneh et al., 2019).
However, dictionary-based lexicons have shortcomings in that they have difficulty capturing cultural connotations and language-specific features of the language.
For example, Amharic words which are spoken culturally and used to express opinions will not be obtained from dictionary based sentiment lexicons.
The word ጉርሻ/"feed in other people with hands which expresses love and live in harmony with others"/ in the Amharic text: "እንደ ጉርሻ ግን የሚያግባባን የለም. . . ጉርሻ እኮ አንዱ ለሌላው የማጉረስ ተግባር ብቻ አይደለም፤ በተጠቀለለው እንጀራ ውስጥ ፍቅር አለ፣ መተሳሰብ አለ፣ አክብሮት አለ።" has positive connotation or positive sentiment.
But the dictionary meaning of the word ጉርሻ is "bonus".
This is far away from the cultural connotation that it is intended to represent and express.
We assume that such culture-specific (or language-specific) words are found in collections of Amharic texts.
However, dictionary-based lexicons fall short in capturing sentiment terms which have strong ties to the language- and culture-specific connotations of Amharic.
Thus, this work builds a corpus-based algorithm to handle language- and culture-specific words in the lexicons.
However, it could probably be impossible to handle all the words in the language as the corpus is a limited resource in almost all less resourced languages like Amharic.
But still it is possible to build sentiment lexicons in particular domain where large amount of Amharic corpus is available.
Due to this reason, the lexicon built using this approach is usually used for lexicon based sentiment analysis in the same domain from which it is built.
The research questions to be addressed utilizing this approach are: (1) How can we build an approach to generate an Amharic sentiment lexicon from a corpus?
(2) How do we evaluate the validity and quality of the generated lexicon?
In this work, we set this approach to build Amharic polarity lexicons in automatic way relying on Amharic corpora which is mentioned shortly.
The corpora are collected from different local news media organizations and also from facebook news' comments and you tube video comments to extend and enhance corpus size to capture sentiment terms into the generated PPMI based lexicon.
Using corpus based approach, Amharic sentiment lexicon is built where it allows finding domain dependent opinions which might not be possible by sentiment lexicon generated using dictionary based approach.
In this section, we have attempted to develop new approaches to bootstrapping relying on word-context semantic space representation of large Amharic corpora.
Creating a sentiment lexicon generation is not an objective process.
The generated lexicon is dependent on the task it is applied.
Thus, in this work we have seen that it is possible to create Sentiment lexicon for low resourced languages from corpus.
This captures the language specific features and connotations related to the culture where the language is spoken.
This can not be handled using dictionary based approach that propagates labels from resource rich languages.
To the best of our knowledge, the the PPMI based approach to generate Amharic Sentiment lexicon form corpus is performed for first time for Amharic language with almost minimal costs and time.
Thus, the generated lexicons can be used in combination with other sentiment lexicons to enhance the performance of sentiment classifications in Amharic language.
The approach is a generic approach which can be adapted to other resource limited languages to reduce cost of human annotation and the time it takes to annotated sentiment lexicons.
Though the PPMI based Amharic Sentiment lexicon outperforms the manual lexicon, prediction (word embedding) based approach is recommended to generate sentiment lexicon for Amharic language to handle context sensitive terms. | Corpus based Algorithm is developed generate Amharic Sentiment lexicon relying on corpus | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:626 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Optimistic initialisation is an effective strategy for efficient exploration in reinforcement learning (RL).
In the tabular case, all provably efficient model-free algorithms rely on it.
However, model-free deep RL algorithms do not use optimistic initialisation despite taking inspiration from these provably efficient tabular algorithms.
In particular, in scenarios with only positive rewards, Q-values are initialised at their lowest possible values due to commonly used network initialisation schemes, a pessimistic initialisation.
Merely initialising the network to output optimistic Q-values is not enough, since we cannot ensure that they remain optimistic for novel state-action pairs, which is crucial for exploration.
We propose a simple count-based augmentation to pessimistically initialised Q-values that separates the source of optimism from the neural network.
We show that this scheme is provably efficient in the tabular setting and extend it to the deep RL setting.
Our algorithm, Optimistic Pessimistically Initialised Q-Learning (OPIQ), augments the Q-value estimates of a DQN-based agent with count-derived bonuses to ensure optimism during both action selection and bootstrapping.
We show that OPIQ outperforms non-optimistic DQN variants that utilise a pseudocount-based intrinsic motivation in hard exploration tasks, and that it predicts optimistic estimates for novel state-action pairs.
In reinforcement learning (RL), exploration is crucial for gathering sufficient data to infer a good control policy.
As environment complexity grows, exploration becomes more challenging and simple randomisation strategies become inefficient.
While most provably efficient methods for tabular RL are model-based (Brafman and Tennenholtz, 2002; Strehl and Littman, 2008; Azar et al., 2017) , in deep RL, learning models that are useful for planning is notoriously difficult and often more complex (Hafner et al., 2019) than modelfree methods.
Consequently, model-free approaches have shown the best final performance on large complex tasks (Mnih et al., 2015; 2016; Hessel et al., 2018) , especially those requiring hard exploration (Bellemare et al., 2016; Ostrovski et al., 2017) .
Therefore, in this paper, we focus on how to devise model-free RL algorithms for efficient exploration that scale to large complex state spaces and have strong theoretical underpinnings.
Despite taking inspiration from tabular algorithms, current model-free approaches to exploration in deep RL do not employ optimistic initialisation, which is crucial to provably efficient exploration in all model-free tabular algorithms.
This is because deep RL algorithms do not pay special attention to the initialisation of the neural networks and instead use common initialisation schemes that yield initial Q-values around zero.
In the common case of non-negative rewards, this means Q-values are initialised to their lowest possible values, i.e., a pessimistic initialisation.
While initialising a neural network optimistically would be trivial, e.g., by setting the bias of the final layer of the network, the uncontrolled generalisation in neural networks changes this initialisation quickly.
Instead, to benefit exploration, we require the Q-values for novel state-action pairs must remain high until they are explored.
An empirically successful approach to exploration in deep RL, especially when reward is sparse, is intrinsic motivation (Oudeyer and Kaplan, 2009) .
A popular variant is based on pseudocounts (Bellemare et al., 2016) , which derive an intrinsic bonus from approximate visitation counts over states and is inspired by the tabular MBIE-EB algorithm (Strehl and Littman, 2008) .
However, adding a positive intrinsic bonus to the reward yields optimistic Q-values only for state-action pairs that have already been chosen sufficiently often.
Incentives to explore unvisited states rely therefore on the generalisation of the neural network.
Exactly how the network generalises to those novel state-action pairs is unknown, and thus it is unclear whether those estimates are optimistic when compared to nearby visited state-action pairs.
Figure 1 Consider the simple example with a single state and two actions shown in Figure 1 .
The left action yields +0.1 reward and the right action yields +1 reward.
An agent whose Q-value estimates have been zero-initialised must at the first time step select an action randomly.
As both actions are underestimated, this will increase the estimate of the chosen action.
Greedy agents always pick the action with the largest Q-value estimate and will select the same action forever, failing to explore the alternative.
Whether the agent learns the optimal policy or not is thus decided purely at random based on the initial Q-value estimates.
This effect will only be amplified by intrinsic reward.
To ensure optimism in unvisited, novel state-action pairs, we introduce Optimistic Pessimistically Initialised Q-Learning (OPIQ).
OPIQ does not rely on an optimistic initialisation to ensure efficient exploration, but instead augments the Q-value estimates with count-based bonuses in the following manner:
where N (s, a) is the number of times a state-action pair has been visited and M, C > 0 are hyperparameters.
These Q + -values are then used for both action selection and during bootstrapping, unlike the above methods which only utilise Q-values during these steps.
This allows OPIQ to maintain optimism when selecting actions and bootstrapping, since the Q + -values can be optimistic even when the Q-values are not.
In the tabular domain, we base OPIQ on UCB-H (Jin et al., 2018) , a simple online Q-learning algorithm that uses count-based intrinsic rewards and optimistic initialisation.
Instead of optimistically initialising the Q-values, we pessimistically initialise them and use Q + -values during action selection and bootstrapping.
Pessimistic initialisation is used to enable a worst case analysis where all of our Q-value estimates underestimate Q * and is not a requirement for OPIQ.
We show that these modifications retain the theoretical guarantees of UCB-H.
Furthermore, our algorithm easily extends to the Deep RL setting.
The primary difficulty lies in obtaining appropriate state-action counts in high-dimensional and/or continuous state spaces, which has been tackled by a variety of approaches (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017; Machado et al., 2018a) and is orthogonal to our contributions.
We demonstrate clear performance improvements in sparse reward tasks over
1) a baseline DQN that just uses intrinsic motivation derived from the approximate counts,
2) simpler schemes that aim for an optimistic initialisation when using neural networks, and
3) strong exploration baselines.
We show the importance of optimism during action selection for ensuring efficient exploration.
Visualising the predicted Q + -values shows that they are indeed optimistic for novel state-action pairs.
This paper presented OPIQ, a model-free algorithm that does not rely on an optimistic initialisation to ensure efficient exploration.
Instead, OPIQ augments the Q-values estimates with a count-based optimism bonus.
We showed that this is provably efficient in the tabular setting by modifying UCB-H to use a pessimistic initialisation and our augmented Q + -values for action selection and bootstrapping.
Since our method does not rely on a specific initialisation scheme, it easily scales to deep RL when paired with an appropriate counting scheme.
Our results showed the benefits of maintaining optimism both during action selection and bootstrapping for exploration on a number of hard sparse reward environments including Montezuma's Revenge.
In future work, we aim to extend OPIQ by integrating it with more expressive counting schemes. | We augment the Q-value estimates with a count-based bonus that ensures optimism during action selection and bootstrapping, even if the Q-value estimates are pessimistic. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:627 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks.
However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning.
Both of these challenges severely limit the applicability of such methods to complex, real-world domains.
In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework.
In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible.
Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods.
By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods.
Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.
We presented soft actor-critic (SAC), an off-policy maximum entropy deep reinforcement learning algorithm that provides sample-efficient learning while retaining the benefits of entropy maximization and stability.
Our theoretical results derive soft policy iteration, which we show to converge to the optimal policy.
From this result, we can formulate a soft actor-critic algorithm, and we empirically show that it outperforms state-of-the-art model-free deep RL methods, including the off-policy DDPG algorithm and the on-policy TRPO algorithm.
In fact, the sample efficiency of this approach actually exceeds that of DDPG by a substantial margin.
Our results suggest that stochastic, entropy maximizing reinforcement learning algorithms can provide a promising avenue for improved robustness and stability, and further exploration of maximum entropy methods, including methods that incorporate second order information (e.g., trust regions BID21 ) or more expressive policy classes is an exciting avenue for future work. | We propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:628 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In many settings, it is desirable to learn decision-making and control policies through learning or from expert demonstrations.
The most common approaches under this framework are Behaviour Cloning (BC), and Inverse Reinforcement Learning (IRL).
Recent methods for IRL have demonstrated the capacity to learn effective policies with access to a very limited set of demonstrations, a scenario in which BC methods often fail.
Unfortunately, directly comparing the algorithms for these methods does not provide adequate intuition for understanding this difference in performance.
This is the motivating factor for our work.
We begin by presenting $f$-MAX, a generalization of AIRL (Fu et al., 2018), a state-of-the-art IRL method.
$f$-MAX provides grounds for more directly comparing the objectives for LfD.
We demonstrate that $f$-MAX, and by inheritance AIRL, is a subset of the cost-regularized IRL framework laid out by Ho & Ermon (2016).
We conclude by empirically evaluating the factors of difference between various LfD objectives in the continuous control domain.
Modern advances in reinforcement learning aim to alleviate the need for hand-engineered decisionmaking and control algorithms by designing general purpose methods that learn to optimize provided reward functions.
In many cases however, it is either too challenging to optimize a given reward (e.g. due to sparsity of signal), or it is simply impossible to design a reward function that captures the intricate details of desired outcomes.
One approach to overcoming such hurdles is Learning from Demonstrations (LfD), where algorithms are provided with expert demonstrations of how to accomplish desired tasks.The most common approaches in the LfD framework are Behaviour Cloning (BC) and Inverse Reinforcement Learning (IRL) BID22 BID15 .
In standard BC, learning from demonstrations is treated as a supervised learning problem and policies are trained to regress expert actions from a dataset of expert demonstrations.
Other forms of Behaviour Cloning, such as DAgger BID21 , consider how to make use of an expert in a more optimal fashion.
On the other hand, in IRL the aim is to infer the reward function of the expert, and subsequently train a policy to optimize this reward.
The motivation for IRL stems from the intuition that the reward function is the most concise and portable representation of a task BID15 BID0 .Unfortunately
, the standard IRL formulation BID15 faces degeneracy issues 1 . A successful
framework for overcoming such challenges is the Maximum-Entropy (Max-Ent) IRL method BID28 BID27 . A line of research
stemming from the Max-Ent IRL framework has lead to recent "adversarial" methods BID12 BID4 BID7 1 for example, any policy is optimal for the constant reward function r(s, a) = 0 2 BACKGROUND
The motivation for this work stemmed from the superior performance of recent direct Max-Ent IRL methods BID12 BID7 compared to BC in the low-data regime, and the desire to understand the relation between various approaches for Learning from Demonstrations.
We first presented f -MAX, a generalization of AIRL BID7 , which allowed us to interpret AIRL as optimizing for KL (ρ π (s, a)||ρ exp (s, a)).
We demonstrated that f -MAX, and by inhertance AIRL, is a subset of the cost-regularized IRL framework laid out by BID12 .
Comparing to the standard BC objective, E ρ exp (s) [KL (ρ exp (a|s)||ρ π (a|s))], we hypothesized two reasons for the superior performance of AIRL:
1) the additional terms in the objective encouraging the matching of marginal state distributions, and
2) the direction of the KL divergence being optimized.
Setting out to empirically evaluate these claims we presented FAIRL, a one-line modification of the AIRL algorithm that optimizes KL (ρ exp (s, a)||ρ π (s, a)).
FAIRL outperformed BC in a similar fashion to AIRL, which allowed us to conclude the key factor being the matching of state marginals.
Additional comparisons between FAIRL and AIRL provided initial understanding about the role of the direction of the KL being optimized.
In future work we aim to produce results on a more diverse set of more challenging environments.
Additionally, evaluating other choices of f -divergence beyond forward and reverse KL may present interesting avenues for improvement BID26 .
Lastly, but importantly, we would like to understand whether the mode-covering behaviour of FAIRL could result in more robust policies BID19 .A
SOME USEFUL IDENTITIES Let h : S × A → R be an arbitrary function. If
all episodes have the same length T , we have, DISPLAYFORM0 DISPLAYFORM1 In a somewhat similar fashion, in the infinite horizon case with fixed probability γ ∈ (0, 1)
of transitioning to a terminal state, for the discounted sum below we have, DISPLAYFORM2 DISPLAYFORM3 where Γ := 1 1−γ is the normalizer of the sum t γ t . Since
the integral of an infinite series is not always equal to the infinite series of integrals, some analytic considerations must be made to go from equation 34 to 35. But,
one simple case in which it holds is when the ranges of h and all ρ π (s t , a t ) are bounded. | Distribution matching through divergence minimization provides a common ground for comparing adversarial Maximum-Entropy Inverse Reinforcement Learning methods to Behaviour Cloning. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:629 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks.
Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, also reducing the amount of parameters by 80\%.
Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs.
We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics.
We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks.
We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improve our model over a variety of other schemes for training them.
We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared.
Recurrent Neural Networks (RNNs) achieve state-of-the-art performance on a wide range of sequence prediction tasks BID0 BID22 BID50 BID32 .
In this work we examine how to add uncertainty and regularisation to RNNs by means of applying Bayesian methods to training.
This approach allows the network to express uncertainty via its parameters.
At the same time, by using a prior to integrate out the parameters to average across many models during training, it gives a regularisation effect to the network.
Recent approaches either justify dropout BID43 and weight decay as a variational inference scheme BID12 , or apply Stochastic Gradient Langevin dynamics (Welling & Teh, 2011, SGLD) to truncated backpropagation in time directly BID13 .
Interestingly, recent work has not explored further directly applying a variational Bayes inference scheme BID3 for RNNs as was done in BID14 .
We derive a straightforward approach based upon Bayes by Backprop that we show works well on large scale problems.
Our strategy is a simple alteration to truncated backpropagation through time that results in an estimate of the posterior distribution on the weights of the RNN.
This formulation explicitly leads to a cost function with an information theoretic justification by means of a bits-back argument BID18 where a KL divergence acts as a regulariser.The form of the posterior in variational inference shapes the quality of the uncertainty estimates and hence the overall performance of the model.
We shall show how performance of the RNN can be improved by means of adapting ("sharpening") the posterior locally to a batch.
This sharpening adapts the variational posterior to a batch of data using gradients based upon the batch.
This can be viewed as a hierarchical distribution, where a local batch gradient is used to adapt a global posterior, forming a local approximation for each batch.
This gives a more flexible form to the typical assumption of Gaussian posterior when variational inference is applied to neural networks, which reduces variance.
This technique can be applied more widely across other Bayesian models.The contributions of our work are as follows:• We show how Bayes by Backprop (BBB) can be efficiently applied to RNNs.•
We develop a novel technique which reduces the variance of BBB, and which can be widely adopted in other maximum likelihood frameworks.•
We improve performance on two widely studied benchmarks outperforming established regularisation techniques such as dropout by a big margin.•
We introduce a new benchmark for studying uncertainty of language models.
We have shown how to apply the Bayes by Backprop (BBB) technique to RNNs.
We enhanced it further by introducing the idea of posterior sharpening: a hierarchical posterior on the weights of neural networks that allows a network to adapt locally to batches of data by a gradient of the model.We showed improvements over two open source, widely available models in the language modelling and image captioning domains.
We demonstrated that not only do BBB RNNs often have superior performance to their corresponding baseline model, but are also better regularised and have superior uncertainty properties in terms of uncertainty on out-of-distribution data.
Furthermore, BBB RNNs through their uncertainty estimates show signs of knowing what they know, and when they do not, a critical property for many real world applications such as self-driving cars, healthcare, game playing, and robotics.
Everything from our work can be applied on top of other enhancements to RNN/LSTM models (and other non-recurrent architectures), and the empirical evidence combined with improvements such as posterior sharpening makes variational Bayes methods look very promising.
We are exploring further research directions and wider adoption of the techniques presented in our work. | Variational Bayes scheme for Recurrent Neural Networks | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:63 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Value-based methods constitute a fundamental methodology in planning and deep reinforcement learning (RL).
In this paper, we propose to exploit the underlying structures of the state-action value function, i.e., Q function, for both planning and deep RL.
In particular, if the underlying system dynamics lead to some global structures of the Q function, one should be capable of inferring the function better by leveraging such structures.
Specifically, we investigate the low-rank structure, which widely exists for big data matrices.
We verify empirically the existence of low-rank Q functions in the context of control and deep RL tasks (Atari games).
As our key contribution, by leveraging Matrix Estimation (ME) techniques, we propose a general framework to exploit the underlying low-rank structure in Q functions, leading to a more efficient planning procedure for classical control, and additionally, a simple scheme that can be applied to any value-based RL techniques to consistently achieve better performance on ''low-rank'' tasks.
Extensive experiments on control tasks and Atari games confirm the efficacy of our approach.
Value-based methods are widely used in control, planning and reinforcement learning (Gorodetsky et al., 2018; Alora et al., 2016; Mnih et al., 2015) .
To solve a Markov Decision Process (MDP), one common method is value iteration, which finds the optimal value function.
This process can be done by iteratively computing and updating the state-action value function, represented by Q(s, a) (i.e., the Q-value function).
In simple cases with small state and action spaces, value iteration can be ideal for efficient and accurate planning.
However, for modern MDPs, the data that encodes the value function usually lies in thousands or millions of dimensions (Gorodetsky et al., 2018; 2019) , including images in deep reinforcement learning (Mnih et al., 2015; Tassa et al., 2018) .
These practical constraints significantly hamper the efficiency and applicability of the vanilla value iteration.
Yet, the Q-value function is intrinsically induced by the underlying system dynamics.
These dynamics are likely to possess some structured forms in various settings, such as being governed by partial differential equations.
In addition, states and actions may also contain latent features (e.g., similar states could have similar optimal actions).
Thus, it is reasonable to expect the structured dynamic to impose a structure on the Q-value.
Since the Q function can be treated as a giant matrix, with rows as states and columns as actions, a structured Q function naturally translates to a structured Q matrix.
In this work, we explore the low-rank structures.
To check whether low-rank Q matrices are common, we examine the benchmark Atari games, as well as 4 classical stochastic control tasks.
As we demonstrate in Sections 3 and 4, more than 40 out of 57 Atari games and all 4 control tasks exhibit low-rank Q matrices.
This leads us to a natural question: How do we leverage the low-rank structure in Q matrices to allow value-based techniques to achieve better performance on "low-rank" tasks?
We propose a generic framework that allows for exploiting the low-rank structure in both classical planning and modern deep RL.
Our scheme leverages Matrix Estimation (ME), a theoretically guaranteed framework for recovering low-rank matrices from noisy or incomplete measurements (Chen & Chi, 2018) .
In particular, for classical control tasks, we propose Structured Value-based Planning (SVP).
For the Q matrix of dimension |S| × |A|, at each value iteration, SVP randomly updates a small portion of the Q(s, a) and employs ME to reconstruct the remaining elements.
We show that planning problems can greatly benefit from such a scheme, where much fewer samples (only sample around 20% of (s, a) pairs at each iteration) can achieve almost the same policy as the optimal one.
For more advanced deep RL tasks, we extend our intuition and propose Structured Value-based Deep RL (SV-RL), applicable for any value-based methods such as DQN (Mnih et al., 2015) .
Here, instead of the full Q matrix, SV-RL naturally focuses on the "sub-matrix", corresponding to the sampled batch of states at the current iteration.
For each sampled Q matrix, we again apply ME to represent the deep Q learning target in a structured way, which poses a low rank regularization on this "sub-matrix" throughout the training process, and hence eventually the Q-network's predictions.
Intuitively, as learning a deep RL policy is often noisy with high variance, if the task possesses a low-rank property, this scheme will give a clear guidance on the learning space during training, after which a better policy can be anticipated.
We confirm that SV-RL indeed can improve the performance of various value-based methods on "low-rank" Atari games: SV-RL consistently achieves higher scores on those games.
Interestingly, for complex, "high-rank" games, SV-RL performs comparably.
ME naturally seeks solutions that balance low rank and a small reconstruction error (cf. Section 3.1).
Such a balance on reconstruction error helps to maintain or only slightly degrade the performance for "high-rank" situation.
We summarize our contributions as follows:
• We are the first to propose a framework that leverages matrix estimation as a general scheme to exploit the low-rank structures, from planning to deep reinforcement learning.
• We demonstrate the effectiveness of our approach on classical stochastic control tasks, where the low-rank structure allows for efficient planning with less computation.
• We extend our scheme to deep RL, which is naturally applicable for any value-based techniques.
Across a variety of methods, such as DQN, double DQN, and dueling DQN, experimental results on all Atari games show that SV-RL can consistently improve the performance of value-based methods, achieving higher scores for tasks when low-rank structures are confirmed to exist.
We investigated the structures in value function, and proposed a complete framework to understand, validate, and leverage such structures in various tasks, from planning to deep reinforcement learning.
The proposed SVP and SV-RL algorithms harness the strong low-rank structures in the Q function, showing consistent benefits for both planning tasks and value-based deep reinforcement learning techniques.
Extensive experiments validated the significance of the proposed schemes, which can be easily embedded into existing planning and RL frameworks for further improvements.
randomly sample a set Ω of observed entries from S × A, each with probability p 4: / * update the randomly selected state-action pairs * / 5:
end for 8:
/ * reconstruct the Q matrix via matrix estimation * / 9:
apply matrix completion to the observed values {Q(s, a)} (s,a)∈Ω to reconstruct Q (t+1) : | We propose a generic framework that allows for exploiting the low-rank structure in both planning and deep reinforcement learning. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:630 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Learned representations of source code enable various software developer tools, e.g., to detect bugs or to predict program properties.
At the core of code representations often are word embeddings of identifier names in source code, because identifiers account for the majority of source code vocabulary and convey important semantic information.
Unfortunately, there currently is no generally accepted way of evaluating the quality of word embeddings of identifiers, and current evaluations are biased toward specific downstream tasks.
This paper presents IdBench, the first benchmark for evaluating to what extent word embeddings of identifiers represent semantic relatedness and similarity.
The benchmark is based on thousands of ratings gathered by surveying 500 software developers.
We use IdBench to evaluate state-of-the-art embedding techniques proposed for natural language, an embedding technique specifically designed for source code, and lexical string distance functions, as these are often used in current developer tools.
Our results show that the effectiveness of embeddings varies significantly across different embedding techniques and that the best available embeddings successfully represent semantic relatedness.
On the downside, no existing embedding provides a satisfactory representation of semantic similarities, e.g., because embeddings consider identifiers with opposing meanings as similar, which may lead to fatal mistakes in downstream developer tools.
IdBench provides a gold standard to guide the development of novel embeddings that address the current limitations.
Reasoning about source code based on learned representations has various applications, such as predicting method names (Allamanis et al., 2015) , detecting bugs (Pradel & Sen, 2018) and vulnerabilities (Harer et al., 2018) , predicting types (Malik et al., 2019) , detecting similar code (White et al., 2016; Xu et al., 2017) , inferring specifications (DeFreez et al., 2018) , code de-obfuscation (Raychev et al., 2015; Alon et al., 2018a) , and program repair (Devlin et al., 2017) .
Many of these techniques are based on embeddings of source code, which map a given piece of code into a continuous vector representation that encodes some aspect of the semantics of the code.
A core component of most code embeddings are semantic representations of identifier names, i.e., the names of variables, functions, classes, fields, etc. in source code.
Similar to words in natural languages, identifiers are the basic building block of source code.
Identifiers not only account for the majority of the vocabulary of source code, but they also convey important information about the (intended) meaning of code.
To reason about identifiers and their meaning, code analysis techniques build on learned embeddings of identifiers, either by adapting embeddings that were originally proposed for natural languages (Mikolov et al., 2013a; or with embeddings specifically designed for source code (Alon et al., 2018a) .
Given the importance of identifier embeddings, a crucial challenge is measuring how effective an embedding represents the semantic relationships between identifiers.
For word embeddings in natural language, the community has addressed this question through a series of gold standards (Finkelstein et al., 2002; Bruni et al., 2014a; Rubenstein & Goodenough, 1965; Miller & Charles, 1991; Hill et al., 2015; Gerz et al., 2016) .
These gold standards define how similar two words are based on ratings by human judges, enabling an evaluation that measures how well an embedding reflects the human ratings.
Unfortunately, simply reusing existing gold standards to identifiers in source code would be misleading.
One reason is that the vocabularies of natural languages and source code overlap only partially, because source code contains various terms and abbreviations not found in natural language texts.
Moreover, source code has a constantly growing vocabulary, as developers tend to invent new identifiers, e.g., for newly emerging application domains Babii et al. (2019) .
Finally, even words present in both natural languages and source code may differ in their meaning due to computer science-specific meanings of some words, e.g., "float" or "string".
This paper addresses the problem of measuring and comparing the effectiveness of embeddings of identifiers.
We present IdBench, a benchmark for evaluating techniques that represent semantic similarities of identifiers.
The basis of the benchmark is a dataset of developer opinions about the similarity of pairs of identifiers.
We gather this dataset through two surveys that show realworld identifiers and code snippets to hundreds of developers, asking them to rate their similarity.
Taking the developer opinions as a gold standard, IdBench allows for evaluating embeddings in a systematic way by measuring to what extent an embedding agrees with ratings given by developers.
Moreover, inspecting pairs of identifiers for which an embedding strongly agrees or disagrees with the benchmark helps understand the strengths and weaknesses of current embeddings.
Overall, we gather thousands of ratings from 500 developers.
Cleaning and compiling this raw dataset into a benchmark yields several hundreds of pairs of identifiers with gold standard similarities, including identifiers from a wide range of application domains.
We apply our approach to a corpus of JavaScript code, because several recent pieces of work on identifier names and code embeddings focus on this language (Pradel & Sen, 2018; Alon et al., 2018b; a; Malik et al., 2019) .
Applying our methodology to another language is straightforward.
Based on the newly created benchmark, we evaluate and compare state-of-the-art embeddings of identifiers.
We find that different embedding techniques differ heavily in terms of their ability to accurately represent identifier relatedness and similarity.
The best available technique, the CBOW variant of FastText, accurately represents relatedness, but none of the available techniques accurately represents identifier similarities.
One reason is that some embeddings are confused about identifiers with opposite meaning, e.g., rows and cols, and about identifiers that belong to the same application domain but are not similar.
Another reason is that some embeddings miss synonyms, e.g., file and record.
We also find that simple string distance functions, which measure the similarity of identifiers without any learning, are surprisingly effective, and even outperform some learned embeddings for the similarity task.
In summary, this paper makes the following contributions.
(1) Methodology: To the best of our knowledge, we are the first to systematically evaluate embeddings of identifiers.
Our methodology is based on surveying developers and summarizing their opinions into gold standard similarities of pairs of identifiers.
(2) Reusable benchmark: We make available a benchmark of hundreds of pairs of identifiers, providing a way to systematically evaluate existing and future embeddings.
While the best available embeddings are highly effective at representing relatedness, none of the studied techniques reaches the same level of agreement for similarity.
In fact, even the best results in Figures 4b and 4c (39%) clearly stay beyond the IRA of our benchmark (62%), showing a huge potential for improvement.
For many applications of embeddings of identifiers, semantic similarity is crucial.
For example, tools to suggest suitable variable or method names (Allamanis et al., 2015; Alon et al., 2018a) aim for the name that is most similar, not only most related, to the concept represented by the variable or method.
Likewise, identifier name-based tools for finding programming errors (Pradel & Sen, 2018) or variable misuses (Allamanis et al., 2017) want to identify situations where the developer uses a wrong, but perhaps related, variable.
The lack of embeddings that accurately represent the semantic similarities of identifiers motivates more work on embedding techniques suitable for this task.
To better understand why current embeddings sometimes fail to accurately represent similarities, Table 1 shows the most similar identifiers of selected identifiers according to the FastText-cbow and path-based embeddings.
The examples illustrate two observations.
First, FastText, due to its use of n-grams (Bojanowski et al., 2017) , tends to cluster identifiers based on lexical commonalities.
While many lexically similar identifiers are also semantically similar, e.g., substr and substring, this approach misses other synonyms, e.g., item and entry.
Another downside is that lexical similarity may also establish wrong relationships.
For example, substring and substrCount represent different concepts, but FastText finds them to be highly similar.
Second, in contrast to FastText, path-based embeddings tend to cluster words based on their structural and syntactical contexts.
This approach helps the embeddings to identify synonyms despite their lexical differences, e.g., count and total, or files and records.
The downside is that it also clusters various related but not similar identifiers, e.g., minText and maxText, or substr and getPadding.
Some of these identifiers even have opposing meanings, e.g., rows and cols, which can mislead code analysis tools when reasoning about the semantics of code.
A somewhat surprising result is that simple string distance functions achieve a level of agreement with IdBench's similarity gold standards as high as some learned embeddings.
The reason why string distance functions sometimes correctly identify semantic similarities is that some semantically similar identifiers are also be lexically similar.
One downside of lexical approaches is that they miss synonymous identifiers, e.g., count and total.
This paper presents the first benchmark for evaluating vector space embeddings of identifiers names, which are at the core of many machine learning models of source code.
We compile thousands of ratings gathered from 500 developers into three benchmarks that provide gold standard similarity scores representing the relatedness, similarity, and contextual similarity of identifiers.
Using IdBench to experimentally compare five embedding techniques and two string distance functions shows that these techniques differ significantly in their agreement with our gold standard.
The best available embedding is very effective at representing how related identifiers are.
However, all studied techniques show huge room for improvement in their ability to represent how similar identifiers are.
IdBench will help steer future efforts on improved embeddings of identifiers, which will eventually enable better machine learning models of source code. | A benchmark to evaluate neural embeddings of identifiers in source code. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:631 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative adversarial nets (GANs) are widely used to learn the data sampling process and their performance may heavily depend on the loss functions, given a limited computational budget.
This study revisits MMD-GAN that uses the maximum mean discrepancy (MMD) as the loss function for GAN and makes two contributions.
First, we argue that the existing MMD loss function may discourage the learning of fine details in data as it attempts to contract the discriminator outputs of real data.
To address this issue, we propose a repulsive loss function to actively learn the difference among the real data by simply rearranging the terms in MMD.
Second, inspired by the hinge loss, we propose a bounded Gaussian kernel to stabilize the training of MMD-GAN with the repulsive loss function.
The proposed methods are applied to the unsupervised image generation tasks on CIFAR-10, STL-10, CelebA, and LSUN bedroom datasets.
Results show that the repulsive loss function significantly improves over the MMD loss at no additional computational cost and outperforms other representative loss functions.
The proposed methods achieve an FID score of 16.21 on the CIFAR-10 dataset using a single DCGAN network and spectral normalization.
Generative adversarial nets (GANs) BID7 ) are a branch of generative models that learns to mimic the real data generating process.
GANs have been intensively studied in recent years, with a variety of successful applications (Karras et al. (2018) ; Li et al. (2017b) ; Lai et al. (2017) ; Zhu et al. (2017) ; BID13 ).
The idea of GANs is to jointly train a generator network that attempts to produce artificial samples, and a discriminator network or critic that distinguishes the generated samples from the real ones.
Compared to maximum likelihood based methods, GANs tend to produce samples with sharper and more vivid details but require more efforts to train.Recent studies on improving GAN training have mainly focused on designing loss functions, network architectures and training procedures.
The loss function, or simply loss, defines quantitatively the difference of discriminator outputs between real and generated samples.
The gradients of loss functions are used to train the generator and discriminator.
This study focuses on a loss function called maximum mean discrepancy (MMD), which is well known as the distance metric between two probability distributions and widely applied in kernel two-sample test BID8 ).
Theoretically, MMD reaches its global minimum zero if and only if the two distributions are equal.
Thus, MMD has been applied to compare the generated samples to real ones directly (Li et al. (2015) ; BID5 ) and extended as the loss function to the GAN framework recently (Unterthiner et al. (2018) ; Li et al. (2017a) ; ).In
this paper, we interpret the optimization of MMD loss by the discriminator as a combination of attraction and repulsion processes, similar to that of linear discriminant analysis. We
argue that the existing MMD loss may discourage the learning of fine details in data, as the discriminator attempts to minimize the within-group variance of its outputs for the real data. To
address this issue, we propose a repulsive loss for the discriminator that explicitly explores the differences among real data. The
proposed loss achieved significant improvements over the MMD loss on image generation tasks of four benchmark datasets, without incurring any additional computational cost. Furthermore
, a bounded Gaussian kernel is proposed to stabilize the training of discriminator. As such, using
a single kernel in MMD-GAN is sufficient, in contrast to a linear combination of kernels used in Li et al. (2017a) and . By using a single
kernel, the computational cost of the MMD loss can potentially be reduced in a variety of applications.The paper is organized as follows. Section 2 reviews
the GANs trained using the MMD loss (MMD-GAN) . We propose the repulsive
loss for discriminator in Section 3, introduce two practical techniques to stabilize the training process in Section 4, and present the results of extensive experiments in Section 5. In the last section, we
discuss the connections between our model and existing work.
This study extends the previous work on MMD-GAN (Li et al. (2017a) ) with two contributions.
First, we interpreted the optimization of MMD loss as a combination of attraction and repulsion processes, and proposed a repulsive loss for the discriminator that actively learns the difference among real data.
Second, we proposed a bounded Gaussian RBF (RBF-B) kernel to address the saturation issue.
Empirically, we observed that the repulsive loss may result in unstable training, due to factors including initialization (Appendix A.2), learning rate ( FIG7 and Lipschitz constraints on the discriminator (Appendix C.3).
The RBF-B kernel managed to stabilize the MMD-GAN training in many cases.
Tuning the hyper-parameters in RBF-B kernel or using other regularization methods may further improve our results.The theoretical advantages of MMD-GAN require the discriminator to be injective.
The proposed repulsive loss (Eq. 4) attempts to realize this by explicitly maximizing the pair-wise distances among the real samples.
Li et al. (2017a) achieved the injection property by using the discriminator as the encoder and an auxiliary network as the decoder to reconstruct the real and generated samples, which is more computationally extensive than our proposed approach.
On the other hand, ; imposed a Lipschitz constraint on the discriminator in MMD-GAN via gradient penalty, which may not necessarily promote an injective discriminator.The idea of repulsion on real sample scores is in line with existing studies.
It has been widely accepted that the quality of generated samples can be significantly improved by integrating labels (Odena et al. (2017); Miyato & Koyama (2018) ; Zhou et al. (2018) ) or even pseudo-labels generated by k-means method BID9 ) in the training of discriminator.
The reason may be that the labels help concentrate the data from the same class and separate those from different classes.
Using a pre-trained classifier may also help produce vivid image samples BID14 ) as the learned representations of the real samples in the hidden layers of the classifier tend to be well separated/organized and may produce more meaningful gradients to the generator.At last, we note that the proposed repulsive loss is orthogonal to the GAN studies on designing network structures and training procedures, and thus may be combined with a variety of novel techniques.
For example, the ResNet architecture BID11 ) has been reported to outperform the plain DCGAN used in our experiments on image generation tasks (Miyato et al. (2018) ; BID10 ) and self-attention module may further improve the results (Zhang et al. (2018) ).
On the other hand, Karras et al. (2018) proposed to progressively grows the size of both discriminator and generator and achieved the state-of-the-art performance on unsupervised training of GANs on the CIFAR-10 dataset.
Future work may explore these directions.
This section shows that constant discriminator output DISPLAYFORM0 may have no discrimination power.
First, we make the following assumptions:Assumption
3.
1. D is a multilayer perceptron where each layer l can be factorized into an affine transform and an element-wise activation function f l .
2. Each activation function f l ∈ C 0 ; furthermore, f l has a finite number of discontinuities and f l ∈ C 06
. 3. Input data to D is continuous and its support S is compact in R d with non-zero measure in each dimension and d > 1 7 .Based on Assumption 3, we have the following proposition:Proposition
2. If ∀x ∈ S, D(x) = c, where c is constant, then there always exists distortion δx such that x + δx ∈ S and D(x + δx) = c. | Rearranging the terms in maximum mean discrepancy yields a much better loss function for the discriminator of generative adversarial nets | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:632 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep neural networks have shown incredible performance for inference tasks in a variety of domains.
Unfortunately, most current deep networks are enormous cloud-based structures that require significant storage space, which limits scaling of deep learning as a service (DLaaS) and use for on-device augmented intelligence.
This paper finds algorithms that directly use lossless compressed representations of deep feedforward networks (with synaptic weights drawn from discrete sets), to perform inference without full decompression.
The basic insight that allows less rate than naive approaches is the recognition that the bipartite graph layers of feedforward networks have a kind of permutation invariance to the labeling of nodes, in terms of inferential operation and that the inference operation depends locally on the edges directly connected to it.
We also provide experimental results of our approach on the MNIST dataset.
Deep learning has achieved incredible performance for inference tasks such as speech recognition, image recognition, and natural language processing.
Most current deep neural networks, however, are enormous cloud-based structures that are too large and too complex to perform fast, energyefficient inference on device or for scaling deep learning as a service (DLaaS).
Compression, with the capability of providing inference without full decompression, is important.
Universal source coding for feedforward deep networks having synaptic weights drawn from finite sets that essentially achieve the entropy lower bound were introduced in BID0 .
Here, we provide-for the first time-an algorithm that directly uses these compressed representations for inference tasks without complete decompression.
Structures that can represent information near the entropy bound while also allowing efficient operations on them are called succinct structures (2; 3; 4).
Thus, we provide a succinct structure for feedforward neural networks, which may fit on-device and enable scaling of DLaaS.Related Work: There has been recent interest in compact representations of neural networks (5; 6; 7; 8; 9; 10; 11; 12; 13; 14) .
While most of these algorithms are lossy, we provide an efficient lossless algorithm, which can be used on top of any lossy algorithm that quantizes or prunes network weights; prior work on lossless compression of neural networks either used Huffman coding in a way that did not exploit invariances or was not succinct and required full decompression for inference.
The proposed algorithm builds on the sublinear entropy-achieving representation in (1) but is the first time succinctness-the further ability to perform inference with negligible space needed for partial decompression-has been attempted or achieved.
Our inference algorithm is similar to arithmetic decoding and so computational performance is also governed by efficient implementations of arithmetic coding.
Efficient high-throughput implementations of arithmetic coding/decoding have been developed for video, e.g. as part of the H.264/AVC and HEVC standards (15; 16). | This paper finds algorithms that directly use lossless compressed representations of deep feedforward networks, to perform inference without full decompression. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:633 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train.
One common way to tackle this issue has been to propose new formulations of the GAN objective.
Yet, surprisingly few studies have looked at optimization methods designed for this adversarial training.
In this work, we cast GAN optimization problems in the general variational inequality framework.
Tapping into the mathematical programming literature, we counter some common misconceptions about the difficulties of saddle point optimization and propose to extend methods designed for variational inequalities to the training of GANs.
We apply averaging, extrapolation and a computationally cheaper variant that we call extrapolation from the past to the stochastic gradient method (SGD) and Adam.
Generative adversarial networks (GANs) BID12 ) form a generative modeling approach known for producing realistic natural images (Karras et al., 2018) as well as high quality super-resolution (Ledig et al., 2017) and style transfer (Zhu et al., 2017) .
Nevertheless, GANs are also known to be difficult to train, often displaying unstable behavior BID11.
Much recent work has tried to tackle these training difficulties, usually by proposing new formulations of the GAN objective (Nowozin et al., 2016).
Each of these formulations can be understood as a two-player game, in the sense of game theory (Von Neumann and Morgenstern, 1944), and can be addressed as a variational inequality problem (VIP) BID15, a framework that encompasses traditional saddle point optimization algorithms (Korpelevich, 1976).
Solving such GAN games is traditionally approached by running variants of stochastic gradient descent (SGD) initially developed for optimizing supervised neural network objectives.
Yet it is known that for some games (Goodfellow, 2016, §8.2) SGD exhibits oscillatory behavior and fails to converge.
This oscillatory behavior, which does not arise from stochasticity, highlights a fundamental problem: while a direct application of basic gradient descent is an appropriate method for regular minimization problems, it is not a sound optimization algorithm for the kind of two-player games of GANs.
This constitutes a fundamental issue for GAN training, and calls for the use of more principled methods with more reassuring convergence guarantees.
Contributions. We point out that multi-player games can be cast as variational inequality problems (VIPs), and that consequently the same applies to any GAN formulation posed as a minimax or non-zero-sum game.
We present two techniques from this literature, namely averaging and extrapolation, which are widely used to solve VIPs but have not been explored in the context of GANs before.
We extend standard GAN training methods such as SGD or Adam into variants that incorporate these techniques (Alg. 4 is new).
We also explain that the oscillations of basic SGD for GAN training previously noticed BID11 can be explained by standard variational inequality optimization results, and we illustrate how averaging and extrapolation can fix this issue.
We introduce a technique, called extrapolation from the past, that requires only one gradient computation per update, compared to extrapolation, which requires computing the gradient twice; from a VIP perspective this rediscovers a particular case of optimistic mirror descent (Rakhlin and Sridharan, 2013).
We prove its convergence for strongly monotone operators and in the stochastic VIP setting.
Finally, we test these techniques in the context of GAN training.
We observe a 4-6% improvement over Miyato et al. (2018) on the inception score and the Fréchet inception distance on the CIFAR-10 dataset using a WGAN-GP BID14 and a ResNet generator.
We newly addressed GAN objectives in the framework of variational inequality.
We tapped into the optimization literature to provide more principled techniques to optimize such games.
We leveraged these techniques to develop practical optimization algorithms suitable for a wide range of GAN training objectives (including non-zero sum games and projections onto constraints).
We experimentally verified that this could yield better trained models, improving the previous state of the art.
The presented techniques address a fundamental problem in GAN training in a principled way, and are orthogonal to the design of new GAN architectures and objectives.
They are thus likely to be widely applicable, and benefit future development of GANs. | We cast GANs in the variational inequality framework and import techniques from this literature to optimize GANs better; we give algorithmic extensions and empirically test their performance for training GANs. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:634 |
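The oscillation issue and the two remedies described in the entry above can be seen without any GAN machinery. The numpy sketch below contrasts plain simultaneous gradient steps with extrapolation (the extragradient method) and with the cheaper extrapolation-from-the-past variant on the classic bilinear game min_x max_y x·y; the step size, iteration count, and toy objective are choices made here for illustration and are not the paper's Algorithm or its Adam-based variants.

```python
import numpy as np

def F(z):
    """Vector field of the bilinear saddle-point game min_x max_y x*y: F(x, y) = (y, -x)."""
    x, y = z
    return np.array([y, -x])

def run(method, steps=500, eta=0.1):
    z = np.array([1.0, 1.0])
    grad_cached = F(z)                          # only used by "extrapolation from the past"
    for _ in range(steps):
        if method == "simultaneous_gd":         # plain gradient steps: spirals away from (0, 0)
            z = z - eta * F(z)
        elif method == "extrapolation":         # extragradient: two gradient evaluations per step
            z_half = z - eta * F(z)
            z = z - eta * F(z_half)
        elif method == "extrapolation_from_past":
            z_half = z - eta * grad_cached      # reuse the previous step's lookahead gradient
            grad_cached = F(z_half)             # the only fresh gradient evaluation this step
            z = z - eta * grad_cached
    return np.linalg.norm(z)                    # distance to the equilibrium (0, 0)

for m in ["simultaneous_gd", "extrapolation", "extrapolation_from_past"]:
    print(f"{m:24s} final distance to (0, 0): {run(m):.4f}")
```

Running this shows the plain method drifting away from the equilibrium while both extrapolation variants shrink the distance, which is the qualitative behavior the paper attributes to principled VIP methods.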
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In order to learn efficiently with a small amount of data on new tasks, meta-learning transfers knowledge learned from previous tasks to the new ones.
However, a critical challenge in meta-learning is task heterogeneity, which cannot be handled well by traditional globally shared meta-learning methods.
In addition, current task-specific meta-learning methods may either suffer from hand-crafted structure design or lack the capability to capture complex relations between tasks.
In this paper, motivated by the way of knowledge organization in knowledge bases, we propose an automated relational meta-learning (ARML) framework that automatically extracts the cross-task relations and constructs the meta-knowledge graph.
When a new task arrives, it can quickly find the most relevant structure and tailor the learned structure knowledge to the meta-learner.
As a result, the proposed framework not only addresses the challenge of task heterogeneity by a learned meta-knowledge graph, but also increases the model interpretability.
We conduct extensive experiments on 2D toy regression and few-shot image classification and the results demonstrate the superiority of ARML over state-of-the-art baselines.
Learning quickly is the key characteristic of human intelligence, which remains a daunting problem in machine intelligence.
The mechanism of meta-learning is widely used to generalize and transfer prior knowledge learned from previous tasks to improve the effectiveness of learning on new tasks, which has benefited various applications such as computer vision (Kang et al., 2019), natural language processing (Gu et al., 2018; Lin et al., 2019) and social good (Zhang et al., 2019; Yao et al., 2019a).
Most existing meta-learning algorithms learn a globally shared meta-learner (e.g., a parameter initialization (Finn et al., 2017), a meta-optimizer (Ravi & Larochelle, 2016), or a metric space (Snell et al., 2017; Garcia & Bruna, 2017; Oreshkin et al., 2018)).
However, globally shared meta-learners fail to handle tasks lying in different distributions, which is known as task heterogeneity (Vuorio et al., 2018; Yao et al., 2019b) .
Task heterogeneity has been regarded as one of the most challenging issues in meta-learning, and thus it is desirable to design meta-learning models that effectively optimize each of the heterogeneous tasks.
The key challenge in dealing with task heterogeneity is: how can the globally shared meta-learner be customized using task-specific information?
Recently, a handful of works try to solve the problem by learning a task-specific representation for tailoring the transferred knowledge to each task (Oreshkin et al., 2018; Vuorio et al., 2018; Lee & Choi, 2018) .
However, the expressiveness of these methods is limited due to the impaired knowledge generalization between highly related tasks.
Recently, learning the underlying structure across tasks provides a more effective way for balancing the customization and generalization.
Representatively, Yao et al. propose a hierarchically structured meta-learning method to customize the globally shared knowledge to each cluster (Yao et al., 2019b) .
Nonetheless, the hierarchical clustering structure completely relies on the handcrafted design which needs to be tuned carefully and may lack the capability to capture complex relationships.
Hence, we are motivated to propose a framework to automatically extract underlying relational structures from historical tasks and leverage those relational structures to facilitate knowledge customization on a new task.
This inspiration comes from the way of structuring knowledge in knowledge bases (i.e., knowledge graphs).
In knowledge bases, the underlying relational structures across text entities are automatically constructed and applied to a new query to improve the searching efficiency.
In the meta-learning problem, similarly, we aim at automatically establishing the metaknowledge graph between prior knowledge learned from previous tasks.
When a new task arrives, it queries the meta-knowledge graph and quickly attends to the most relevant entities (vertices), and then takes advantage of the relational knowledge structures between them to boost the learning effectiveness with the limited training data.
The proposed meta-learning framework is named as Automated Relational Meta-Learning (ARML).
Specifically, ARML automatically builds the meta-knowledge graph from meta-training tasks to memorize and organize knowledge learned from historical tasks, where each vertex represents one type of meta-knowledge (e.g., the common contour between birds and aircraft).
To learn the meta-knowledge graph at meta-training time, for each task, we construct a prototype-based relational graph for each class, where each vertex represents one prototype.
The prototype-based relational graph not only captures the underlying relationships behind samples, but also alleviates the potential effects of abnormal samples.
The meta-knowledge graph is then learned by summarizing the information from the corresponding prototype-based relational graphs of meta-training tasks.
After constructing the meta-knowledge graph, when a new task comes in, the prototype-based relational graph of the new task taps into the meta-knowledge graph for acquiring the most relevant knowledge, which further enhances the task representation and facilitates its training process.
Our major contributions in the proposed ARML are three-fold: (1) it automatically constructs the meta-knowledge graph to facilitate learning a new task; (2) it empirically outperforms state-of-the-art meta-learning algorithms; (3) the meta-knowledge graph captures the relationships among tasks well and improves the interpretability of meta-learning algorithms.
In this paper, to improve the effectiveness of meta-learning for handling heterogeneous tasks, we propose a new framework called ARML, which automatically extracts relations across tasks and constructs a meta-knowledge graph.
When a new task comes in, it can quickly find the most relevant relations through the meta-knowledge graph and use this knowledge to facilitate its training process.
The experiments demonstrate the effectiveness of our proposed algorithm.
In the future, we plan to investigate the problem in the following directions: (1) we are interested in investigating more explainable semantic meaning in the meta-knowledge graph for this problem; (2) we plan to extend ARML to the continual learning scenario, where the structure of the meta-knowledge graph will change over time; (3) our proposed model focuses on tasks where the feature space and the label space are shared, and we plan to explore the relational structure on tasks with different feature and label spaces.
Figure 3: Interpretation of the meta-knowledge graph on the Art-Multi dataset. For each subdataset, we randomly select one task. On the left, we show the similarity heatmap between prototypes (P0-P5) and meta-knowledge vertices (V0-V7); on the right, we show the meta-knowledge graph.
In this dataset, we use pencil and blur filters to change the task distribution.
To investigate the effect of pencil and blur filters, we provide one example in Figure 4 .
We can observe that different filters result in different data distributions.
All filters used are provided by OpenCV. | Addressing task heterogeneity problem in meta-learning by introducing meta-knowledge graph | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:635 |
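To make the prototype-and-graph machinery in the entry above more concrete, here is a small numpy sketch that builds class prototypes from support-set embeddings and soft-attends from them to a set of meta-knowledge vertices. The function names, shapes, and the dot-product attention are illustrative assumptions, not ARML's exact propagation and aggregation rules.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def class_prototypes(embeddings, labels):
    """One prototype per class: the mean embedding of that class's support samples."""
    classes = np.unique(labels)
    return np.stack([embeddings[labels == c].mean(axis=0) for c in classes])

def query_meta_knowledge_graph(prototypes, graph_vertices, temperature=1.0):
    """Soft-attend from task prototypes to meta-knowledge vertices and read out
    knowledge-enhanced prototypes; dot-product attention is one plausible choice."""
    scores = prototypes @ graph_vertices.T / temperature    # (n_proto, n_vertices)
    attn = softmax(scores, axis=-1)
    readout = attn @ graph_vertices                         # (n_proto, dim)
    return attn, readout

# Toy usage: 3 classes with 5 support samples each, 8 meta-knowledge vertices.
rng = np.random.default_rng(0)
emb = rng.normal(size=(15, 32))
lab = np.repeat(np.arange(3), 5)
vertices = rng.normal(size=(8, 32))
protos = class_prototypes(emb, lab)
attn, enhanced = query_meta_knowledge_graph(protos, vertices)
print(attn.shape, enhanced.shape)   # (3, 8) (3, 32)
```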
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, a deep boosting algorithm is developed to learn a more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs (base experts) with diverse capabilities; e.g., these base deep CNNs are sequentially trained to recognize a set of object classes in an easy-to-hard way according to their learning complexities.
Our experimental results demonstrate that our deep boosting algorithm can significantly improve the accuracy rates on large-scale visual recognition.
The rapid growth of the computational power of GPUs has provided good opportunities to develop scalable learning algorithms that leverage massive digital images to train more discriminative classifiers for large-scale visual recognition applications, and deep learning BID19 BID20 BID3 has demonstrated outstanding performance because highly invariant and discriminant features and a multi-way softmax classifier are learned jointly in an end-to-end fashion.
Before deep learning became so popular, boosting had achieved good success on visual recognition BID21.
By embedding multiple weak learners to construct an ensemble classifier, boosting BID15 can significantly improve performance by sequentially training the weak learners with respect to a weighted error function that assigns larger weights to the samples misclassified by the previous weak learners.
Thus it is very attractive to investigate whether boosting can be integrated with deep learning to achieve higher accuracy rates on large-scale visual recognition.
By using neural networks to replace the traditional weak learners in boosting frameworks, boosting of neural networks has received considerable attention BID23 BID10 BID7 BID9.
All these existing deep boosting algorithms simply use the weighted error function (proposed by Adaboost (Schapire, 1999)) to replace the softmax error function (used in deep learning) that treats all errors equally.
Because different object classes may have different learning complexities, it is more attractive to investigate a new deep boosting algorithm that can use different weights over various object classes rather than over different training samples.
Motivated by this observation, a deep boosting algorithm is developed to generate a more discriminative ensemble classifier by combining a set of base deep CNNs with diverse capabilities, e.g., all these base deep CNNs (base experts) are sequentially trained to recognize different subsets of object classes in an easy-to-hard way according to their learning complexities.
The rest of the paper is organized as follows: Section 2 briefly reviews the related work; Section 3 introduces our deep boosting algorithm; Section 4 reports our experimental results; and we conclude the paper in Section 5.
In this paper, we develop a deep boosting algorithm to learn a more discriminative ensemble classifier by combining a set of base experts with diverse capabilities.
The base experts are from the family of deep CNNs and they are sequentially trained to recognize a set of object classes in an easy-to-hard way according to their learning complexities.
As for future work, we would like to investigate the performance of heterogeneous base deep networks from different families. | A deep boosting algorithm is developed to learn more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:636 |
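The class-weighted combination of base experts described in the entry above can be sketched as follows. The AdaBoost-style per-class weight update and the weighted-probability vote are stand-ins chosen for illustration; the function names and the toy numbers are mine, and the paper's exact weighting scheme may differ.

```python
import numpy as np

def class_weights_from_errors(per_class_error, eps=1e-6):
    """AdaBoost-style weight per class: alpha_c = 0.5 * log((1 - e_c) / e_c).

    The paper weights object classes rather than individual samples; this
    particular update rule is only an illustrative stand-in.
    """
    e = np.clip(per_class_error, eps, 1 - eps)
    return 0.5 * np.log((1 - e) / e)

def ensemble_predict(prob_list, class_weight_list):
    """Combine base experts' class-probability outputs with per-class weights."""
    combined = np.zeros_like(prob_list[0])
    for probs, w in zip(prob_list, class_weight_list):
        combined += probs * w[None, :]          # broadcast class weights over samples
    return combined.argmax(axis=1)

# Toy usage: 2 base experts, 4 classes, 6 samples.
rng = np.random.default_rng(0)
p1, p2 = rng.dirichlet(np.ones(4), size=6), rng.dirichlet(np.ones(4), size=6)
w1 = class_weights_from_errors(np.array([0.1, 0.3, 0.5, 0.2]))
w2 = class_weights_from_errors(np.array([0.4, 0.1, 0.2, 0.3]))
print(ensemble_predict([p1, p2], [w1, w2]))
```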
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present a method for translating music across musical instruments and styles.
This method is based on unsupervised training of a multi-domain wavenet autoencoder, with a shared encoder and a domain-independent latent space that is trained end-to-end on waveforms.
Employing a diverse training dataset and large net capacity, the single encoder allows us to translate also from musical domains that were not seen during training.
We evaluate our method on a dataset collected from professional musicians, and achieve convincing translations.
We also study the properties of the obtained translation and demonstrate translating even from a whistle, potentially enabling the creation of instrumental music by untrained humans.
Humans have always created music and replicated it -whether it is by singing, whistling, clapping, or, after some training, playing improvised or standard musical instruments.
This ability is not unique to us, and there are many other vocal mimicking species that are able to repeat music from hearing.
Music is also one of the first domains to be digitized and processed by modern computers and algorithms.
It is, therefore, somewhat surprising that in the core musical task of mimicry, AI is still much inferior to biological systems.
In this work, we present a novel way to produce convincing musical translations between instruments and styles.
For example, we convert the audio of a Mozart symphony performed by an orchestra into audio in the style of a pianist playing Beethoven.
Our ability builds upon two technologies that have recently become available:
(i) the ability to synthesize high quality audio using autoregressive models, and
(ii) the recent advent of methods that transform between domains in an unsupervised way.
The first technology allows us to generate high quality, realistic audio and, thanks to the teacher forcing technique, autoregressive models can be trained efficiently as decoders.
The second family of technologies contributes to the practicality of the solution, since posing the learning problem in the supervised setting would require a parallel dataset of different musical instruments.
In our architecture, we employ a single, universal encoder and apply it to all inputs (universal here means that a single encoder can address all input music, allowing us to achieve capabilities known as universal translation).
In addition to the advantage of training fewer networks, this also enables us to convert from musical domains that were not heard during training to any of the domains encountered.
The key to being able to train a single encoder architecture is making sure that domain-specific information is not encoded.
We do this using a domain confusion network that provides an adversarial signal to the encoder.
In addition, it is important for the encoder not to memorize the input signal but to encode it in a semantic way.
We achieve this by distorting the input audio by random local pitch modulation.
During training, the network is trained as a denoising autoencoder, which recovers the undistorted version of the original input.
Since the distorted input is no longer in the musical domain of the output, the network learns to project out-of-domain inputs to the desired output domain.
In addition, the network no longer benefits from memorizing the input signal and instead employs a higher-level encoding.
Asked to convert one musical instrument to another, our network shows a level of performance that seems to approach that of musicians.
When controlling for audio quality, which is still lower for generated music, it is often hard to tell which is the original audio file and which is the output of the conversion that mimics a completely different instrument.
The network is also able to successfully process unseen musical instruments such as drums, or other sources, such as whistles.
Our work demonstrates capabilities in music conversion, which is a high-level task (a term meaning that it is more semantic than low-level audio processing tasks), and could open the door to other high-level tasks, such as composition.
We have initial results that we find interesting: by reducing the size of the latent space, the decoders become more "creative" and produce outputs that are natural yet novel, in the sense that the exact association with the original input is lost. | An automatic method for converting music between instruments and styles | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:637 |
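The random local pitch modulation used to distort the autoencoder's input in the entry above can be approximated very crudely by resampling short segments, as in the sketch below. Note that naive resampling changes each segment's duration as well as its pitch, and the sample rate, segment length, and detune range here are guesses, so this is only a stand-in for the paper's actual augmentation.

```python
import numpy as np

def random_local_pitch_modulation(wav, sr=16000, seg_seconds=0.25, max_semitones=0.5, seed=0):
    """Crudely detune short segments of a waveform by resampling each one.

    A real implementation would preserve timing (e.g., via a phase vocoder);
    this resampling version is only meant to show the local, random nature of
    the distortion applied before the denoising autoencoder sees the input.
    """
    rng = np.random.default_rng(seed)
    hop = int(seg_seconds * sr)
    out = []
    for start in range(0, len(wav), hop):
        seg = wav[start:start + hop]
        factor = 2.0 ** (rng.uniform(-max_semitones, max_semitones) / 12.0)
        n_new = max(1, int(round(len(seg) / factor)))
        positions = np.linspace(0, len(seg) - 1, n_new)
        out.append(np.interp(positions, np.arange(len(seg)), seg))
    return np.concatenate(out)

# Toy usage: distort one second of a 440 Hz sine wave.
t = np.arange(16000) / 16000.0
distorted = random_local_pitch_modulation(np.sin(2 * np.pi * 440 * t))
print(len(distorted))
```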
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Most existing defenses against adversarial attacks only consider robustness to L_p-bounded distortions.
In reality, the specific attack is rarely known in advance and adversaries are free to modify images in ways which lie outside any fixed distortion model; for example, adversarial rotations lie outside the set of L_p-bounded distortions.
In this work, we advocate measuring robustness against a much broader range of unforeseen attacks, attacks whose precise form is unknown during defense design.
We propose several new attacks and a methodology for evaluating a defense against a diverse range of unforeseen distortions.
First, we construct novel adversarial JPEG, Fog, Gabor, and Snow distortions to simulate more diverse adversaries.
We then introduce UAR, a summary metric that measures the robustness of a defense against a given distortion.
Using UAR to assess robustness against existing and novel attacks, we perform an extensive study of adversarial robustness.
We find that evaluation against existing L_p attacks yields redundant information which does not generalize to other attacks; we instead recommend evaluating against our significantly more diverse set of attacks.
We further find that adversarial training against either one or multiple distortions fails to confer robustness to attacks with other distortion types.
These results underscore the need to evaluate and study robustness against unforeseen distortions.
Neural networks perform well on many benchmark tasks (He et al., 2016) yet can be fooled by adversarial examples (Goodfellow et al., 2014) or inputs designed to subvert a given model.
Adversaries are usually assumed to be constrained by an L_∞ budget (Goodfellow et al., 2014; Madry et al., 2017; Xie et al., 2018), while other modifications such as adversarial geometric transformations, patches, and even 3D-printed objects have also been considered (Engstrom et al., 2017; Brown et al., 2017; Athalye et al., 2017).
However, most work on adversarial robustness assumes that the adversary is fixed and known in advance.
Defenses against adversarial attacks are often constructed in view of this specific assumption (Madry et al., 2017) .
In practice, adversaries can modify and adapt their attacks so that they are unforeseen.
In this work, we propose novel attacks which enable the diverse assessment of robustness to unforeseen attacks.
Our attacks are varied ( §2) and qualitatively distinct from current attacks.
We propose adversarial JPEG, Fog, Gabor, and Snow attacks (sample images in Figure 1 ).
We propose an unforeseen attack evaluation methodology ( §3) that involves evaluating a defense against a diverse set of held-out distortions decoupled from the defense design.
For a fixed, held-out distortion, we then evaluate the defense against the distortion for a calibrated range of distortion sizes whose strength is roughly comparable across distortions.
For each fixed distortion, we summarize the robustness of a defense against that distortion relative to a model adversarially trained on that distortion, a measure we call UAR.
We provide code and calibrations to easily evaluate a defense against our suite of attacks at https://github.com/iclr-2020-submission/ advex-uar.
By applying our method to 87 adversarially trained models and 8 different distortion types ( §4), we find that existing defenses and evaluation practices have marked weaknesses.
Our results show that existing defenses based on adversarial training do not generalize to unforeseen adversaries, even when restricted to the 8 distortions in Figure 1.
Figure 1: Attacked images (label "espresso maker") against adversarially trained models with large ε, for the new attacks JPEG, Fog, Gabor, and Snow. Each of the adversarial images is optimized to maximize the classification loss.
This adds to the mounting evidence that achieving robustness against a single distortion type is insufficient to impart robustness to unforeseen attacks (Jacobsen et al., 2019; Jordan et al., 2019; Tramèr & Boneh, 2019) .
Turning to evaluation, our results demonstrate that accuracies against different L_p distortions are highly correlated relative to the other distortions we consider.
This suggests that the common practice of evaluating only against L_p distortions to test a model's adversarial robustness can give a misleading account.
Our analysis demonstrates that our full suite of attacks adds substantive attack diversity and gives a more complete picture of a model's robustness to unforeseen attacks.
A natural approach is to defend against multiple distortion types simultaneously in the hope that seeing a larger space of distortions provides greater transfer to unforeseen distortions.
Unfortunately, we find that defending against even two different distortion types via joint adversarial training is difficult ( §5).
Specifically, joint adversarial training leads to overfitting at moderate distortion sizes.
In summary, we propose a metric UAR to assess robustness of defenses against unforeseen adversaries.
We introduce a total of 4 novel attacks.
We apply UAR to assess how robustness transfers to existing attacks and our novel attacks.
Our results demonstrate that existing defense and evaluation methods do not generalize well to unforeseen attacks.
We have seen that robustness to one attack provides limited information about robustness to other attacks, and moreover that adversarial training provides limited robustness to unforeseen attacks.
These results suggest a need to modify or move beyond adversarial training.
While joint adversarial training is one possible alternative, our results show it often leads to overfitting.
Even ignoring this, it is not clear that joint training would confer robustness to attacks outside of those trained against.
Evaluating robustness has proven difficult, necessitating detailed study of best practices even for a single fixed attack (Papernot et al., 2017; Athalye et al., 2018) .
We build on these best practices by showing how to choose and calibrate a diverse set of unforeseen attacks.
Our work is a supplement to existing practices, not a replacement-we strongly recommend following the guidelines in Papernot et al. (2017) and Athalye et al. (2018) in addition to our recommendations.
Some caution is necessary when interpreting specific numeric results in our paper.
Many previous implementations of adversarial training fell prone to gradient masking (Papernot et al., 2017; Engstrom et al., 2018) , with apparently successful training occurring only recently (Madry et al., 2017; Xie et al., 2018) .
While evaluating with moderately many PGD steps (200) helps guard against this, (Qian & Wegman, 2019) shows that an L_∞-trained model that appeared robust against L_2 actually had substantially less robustness when evaluated with 10^6 PGD steps.
If this effect is pervasive, then there may be even less transfer between attacks than our current results suggest.
For evaluating against a fixed attack, DeepFool Moosavi-Dezfooli et al. (2015) and CLEVER Weng et al. (2018) can be seen as existing alternatives to UAR.
They work by estimating "empirical robustness", which is the expected minimum ε needed to successfully attack an image.
However, these apply only to attacks which optimize over an L_p-ball of radius ε, and CLEVER can be susceptible to gradient masking Goodfellow (2018).
In addition, empirical robustness is equivalent to linearly averaging accuracy over ε, which has smaller dynamic range than the geometric average in UAR.
Our results add to a growing line of evidence that evaluating against a single known attack type provides a misleading picture of the robustness of a model (Sharma & Chen, 2017; Engstrom et al., 2017; Jordan et al., 2019; Tramèr & Boneh, 2019; Jacobsen et al., 2019) .
Going one step further, we believe that robustness itself provides only a narrow window into model behavior; in addition to robustness, we should seek to build a diverse toolbox for understanding machine learning models, including visualization (Olah et al., 2018; Zhang & Zhu, 2019) , disentanglement of relevant features (Geirhos et al., 2018) , and measurement of extrapolation to different datasets (Torralba & Efros, 2011) or the long tail of natural but unusual inputs (Hendrycks et al., 2019) .
Together, these windows into model behavior can give us a clearer picture of how to make models reliable in the real world. | We propose several new attacks and a methodology to measure robustness against unforeseen adversarial attacks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:638 |
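A compact sketch of the UAR summary metric described in the entry above: accuracies of the evaluated defense and of a distortion-specific adversarially trained reference are aggregated over the same calibrated range of distortion sizes and then compared. Following the text's mention of a geometric average, the aggregation below is a geometric mean; the function name, the toy accuracies, and any scaling convention are mine, and the ε calibration itself comes from the paper's released code.

```python
import numpy as np

def uar(defense_acc, reference_acc):
    """Unforeseen Attack Robustness of a defense against one distortion type.

    Both arguments are accuracies measured at the same calibrated distortion
    sizes; `reference_acc` comes from a model adversarially trained against
    that very distortion, so a value near 1 means "about as robust as training
    directly against the attack".
    """
    d = np.clip(np.asarray(defense_acc, dtype=float), 1e-6, 1.0)
    r = np.clip(np.asarray(reference_acc, dtype=float), 1e-6, 1.0)
    geo_mean = lambda a: float(np.exp(np.log(a).mean()))
    return geo_mean(d) / geo_mean(r)

# Toy usage: accuracies at six calibrated distortion sizes (numbers invented).
print(uar([0.90, 0.80, 0.55, 0.30, 0.12, 0.05],
          [0.92, 0.88, 0.75, 0.60, 0.45, 0.30]))
```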
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep neural networks (DNNs) have emerged as a powerful approach in recent years, solving long-standing artificial intelligence (AI) supervised and unsupervised tasks in natural language processing, speech processing, computer vision and other areas.
In this paper, we attempt to apply DNNs on three different cyber security use cases: Android malware classification, incident detection and fraud detection.
The data set for each use case contains samples of real, known benign and malicious activities.
These use cases are part of Cybersecurity Data Mining Competition (CDMC) 2017.
The efficient network architecture for the DNNs is chosen by conducting various trials of experiments over network parameters and network structures.
The experiments with the chosen DNN configurations are run for up to 1000 epochs with the learning rate set in the range [0.01-0.5].
The DNNs performed well in comparison to the classical machine learning algorithm in all of the cyber security use cases.
This is due to the fact that DNNs implicitly extract and build better features, identifying the characteristics of the data that lead to better accuracy.
The best accuracies obtained by DNNs and XGBoost are 0.940 and 0.741 on Android malware classification, 1.00 and 0.997 on incident detection, and 0.972 and 0.916 on fraud detection, respectively.
The accuracy obtained by DNNs differs by -0.05%, +0.02% and -0.01% from the top-scoring systems in the CDMC 2017 tasks.
In this era of technical modernization, an explosion of new opportunities and potentially efficient resources has emerged for organizations, but at the same time these technologies have resulted in threats to the economy.
In such a scenario, proper security measures play a major role.
Nowadays, hacking has become a common practice in organizations in order to steal data and information.
This highlights the need for an efficient system to detect and prevent fraudulent activities.
Cyber security is all about the protection of systems, networks and data in cyberspace.
Malware remains one of the most significant security threats on the Internet.
Malware refers to software that exhibits malicious activity in a file or program.
These are unwanted programs, since they cause harm to the intended use of the system by making it behave in a very different manner than it is supposed to.
Antivirus solutions and blacklists are used as the primary weapons of resistance against these malwares.
Neither approach is effective on its own; they can only be used as an initial shelter in a real-time malware detection system.
This is primarily because both approaches completely fail in detecting new malware created using polymorphic, metamorphic, domain flux and IP flux techniques.
Machine learning algorithms have played a pivotal role in several use cases of cyber security BID0.
Fortunately, deep learning approaches have become a prevailing subject in recent years due to their remarkable performance in various long-standing artificial intelligence (AI) supervised and unsupervised challenges BID1.
This paper evaluates the effectiveness of deep neural networks (DNNs) for three cyber security use cases: Android malware classification, incident detection and fraud detection.
The paper is structured as follows.
Section II discusses the related work.
Section III discusses the background knowledge of deep neural networks (DNNs).
Section IV presents the proposed methodology including the description of the data set.
Results are displayed in Section V, and the conclusion is given in Section VI.
This paper has evaluated the performance of deep neural networks (DNNs) for the cyber security use cases of Android malware classification, incident detection and fraud detection.
Additionally, a classical machine learning classifier is used for comparison.
In all cases, the performance of the DNNs is good in comparison to the classical machine learning classifier.
Moreover, the same architecture is able to perform better than the classical machine learning classifier in all use cases.
The reported results of the DNNs can be further improved by further training or by stacking a few more layers onto the existing architectures.
This remains one of the directions for future work. | Deep-Net: Deep Neural Network for Cyber Security Use Cases | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:639 |
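For concreteness, a plain feedforward classifier of the kind evaluated in the entry above can be put together as in the sketch below. The layer widths, dropout rate, and the 0.01 learning rate are placeholder choices within the ranges mentioned in the text, not the exact published configuration, and `X_train`/`y_train` in the usage comment are hypothetical variables.

```python
import tensorflow as tf

def build_dnn(input_dim, n_classes, hidden=(512, 256, 128), lr=0.01, dropout_rate=0.1):
    """A plain feedforward DNN classifier; hyperparameters are illustrative only."""
    layers = [tf.keras.Input(shape=(input_dim,))]
    for units in hidden:
        layers.append(tf.keras.layers.Dense(units, activation="relu"))
        layers.append(tf.keras.layers.Dropout(dropout_rate))
    layers.append(tf.keras.layers.Dense(n_classes, activation="softmax"))
    model = tf.keras.Sequential(layers)
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage; X_train is a feature matrix and y_train holds integer labels.
# model = build_dnn(input_dim=X_train.shape[1], n_classes=2)
# model.fit(X_train, y_train, epochs=1000, batch_size=64, validation_split=0.1)
```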
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Over time, unmanned autonomous vehicles (UAVs), especially autonomous flying drones, have attracted a lot of attention in artificial intelligence.
Since electronic technology is getting smaller, cheaper and more efficient, huge advancements in the study of UAVs have been observed recently.
From monitoring floods and discerning the spread of algae in water bodies to detecting forest trails, their applications are far and wide.
Our work is mainly focused on autonomous flying drones, where we establish a case study of the efficiency, robustness and accuracy of UAVs, with our results well supported through experiments.
We provide details of the software and hardware architecture used in the study.
We further discuss our implementation algorithms and present experiments that compare three different state-of-the-art algorithms, namely TrailNet, InceptionResnet and MobileNet, in terms of accuracy, robustness, power consumption and inference time.
In our study, we show that MobileNet produced better results with a much lower computational requirement and power consumption.
We also report the challenges we faced during our work, as well as a brief discussion of future work to improve safety features and performance.
In the modern era, UAVs have become very popular and have the basic intelligence to be driven autonomously.
Considering ground traffic, such vehicles are limited by physical paths and barriers.
However, this is not the case with flying objects like drones, as they do not suffer from such physical limitations.
Autonomous flying objects are much discussed these days and are making strides across many domains: traffic monitoring BID9, agriculture BID11, inventory management BID4, surveillance BID15, data mining, disaster response BID10, etc.
As their areas of application increase, it becomes more important to find algorithms well suited for these kinds of vehicles.
Some applications may not require the drone to be extremely accurate but may require it to work for long durations, e.g. in surveillance, while others may require it to be very precise but not to work for long durations, e.g. in the delivery of items.
In the last decade, significant changes have been observed in the field of autonomous motion planning of vehicles, UAVs in particular.
The motion planning of UAVs is distinctly difficult because of several complexities that come with aerial vehicles.
The salience of differential constraints, uncertainty in the vehicle state and limited knowledge about the environment makes it impossible to have a precise pre-computed plan to follow through.
These differences considerably gave rise to various approaches and techniques for planning the motion of these unmanned autonomous vehicles.
The ambiguity among different algorithms and their inherent complexity create an intriguing scope for a benchmarking study of accuracy, robustness, power consumption, safety and inference time, and for fine-tuning these further.
Throughout this paper, we bear in mind some of the generic characteristics and prerequisites relating to UAVs.
The basic design of a UAV is modelled to have acceleration and velocity constraints.
Furthermore, the higher-order differential constraints also associate themselves with the equation of motion of a drone.
However, the coherent objective involved in all UAVs is to guide the vehicle towards a goal.
In this paper, we introduce, to the best of our knowledge, the first comparative study of three algorithms aimed at better motion control of a drone for detecting a trail.
In order to compare a set of algorithms in a meticulous way, it is necessary to establish their precision and robustness and to evaluate their power consumption as well as inference time.
Along with these metrics established for a particular algorithm, it is also necessary to consider the distinct areas of its application.
Only then, based on the requirements called for by a particular application, can a reasonable opinion about an algorithm be formed.
Our study covers recent developments and algorithms used in the area of trail detection by UAVs and reviews as comprehensively as possible what has already been established regarding these algorithms.
While producing this work, we encountered several challenges.
A few of these challenges are listed below:
1. The major challenge encountered was to run our DNN models on the physical drone in real time, due to a hardware bug we were facing with the FCU.
2. We could not train our models from scratch due to the lack of a sufficiently large dataset. Additionally, we handled a lot of issues to make our models more stable and robust. Since the number of images in each class (left, straight and right) differed in each trial, there was a lot of data imbalance, which we solved by upsampling and downsampling the dataset.
3. Due to the large number of training parameters, our models were initially overfitted. We eliminated over-fitting by introducing several data augmentation techniques (random flipping, random rotation, random contrast and translation, etc.). We further included regularization (especially dropout layers) in order to reduce network complexity.
4. Power is one of the most important factors, especially in mobile embedded devices with small size and limited computational power. Typically, deep learning algorithms consume more power, specifically for real-time inference. We have estimated the power consumption of each of our models by calculating the GPU power drawn by them, but we could not test how long our drone would run with each of these models due to the hardware bug mentioned before.
In this paper, we have presented a comparison between three algorithms, TrailNet, InceptionResnet and MobileNet, in terms of accuracy, computational cost, power consumption, inference time and robustness.
The choice of algorithm for UAVs varies on the basis of several factors.
In our work, we focused on some of the factors that we thought would be pivotal in algorithm selection, allowing reasonable comparisons.
We observed in our study that MobileNet outperformed the others with a much lower computational requirement and power consumption.
Hence, in our opinion, MobileNet is better suited for drones and other embedded devices than TrailNet and InceptionResnet.
Safety is another major concern for drones.
There can be many hazardous situations, such as collisions with objects, external disturbances like wind, the drone moving out of the manual controller's range, battery issues, the chance of theft, and other safety hazards.
We will be implementing these drone-related safety features in our future work. | case study on optimal deep learning model for UAVs | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:64 |
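Inference time, one of the comparison axes in the entry above, can be measured with a simple timing harness like the one below. The warm-up and run counts are arbitrary, and `predict_fn` is a placeholder for whichever network (TrailNet, InceptionResnet, or MobileNet) is being benchmarked; measuring power on the actual device would additionally require an external power monitor or the board's own telemetry and is not shown here.

```python
import time
import numpy as np

def measure_inference(predict_fn, input_shape, n_warmup=10, n_runs=100):
    """Average per-image inference latency (seconds) and throughput (FPS)
    for any callable that maps a batch of images to predictions."""
    x = np.random.rand(1, *input_shape).astype("float32")
    for _ in range(n_warmup):                  # let caches and lazy allocations settle
        predict_fn(x)
    start = time.perf_counter()
    for _ in range(n_runs):
        predict_fn(x)
    latency = (time.perf_counter() - start) / n_runs
    return latency, 1.0 / latency

# Toy usage with a dummy "model" standing in for a real network's predict call.
lat, fps = measure_inference(lambda x: x.mean(), input_shape=(224, 224, 3))
print(f"{lat * 1e3:.3f} ms per frame, {fps:.0f} FPS")
```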
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we present an approach to learn recomposable motor primitives across large-scale and diverse manipulation demonstrations.
Current approaches to decomposing demonstrations into primitives often assume manually defined primitives and bypass the difficulty of discovering these primitives.
On the other hand, approaches in primitive discovery put restrictive assumptions on the complexity of a primitive, which limit applicability to narrow tasks.
Our approach attempts to circumvent these challenges by jointly learning both the underlying motor primitives and recomposing these primitives to form the original demonstration.
Through constraints on both the parsimony of primitive decomposition and the simplicity of a given primitive, we are able to learn a diverse set of motor primitives, as well as a coherent latent representation for these primitives.
We demonstrate, both qualitatively and quantitatively, that our learned primitives capture semantically meaningful aspects of a demonstration.
This allows us to compose these primitives in a hierarchical reinforcement learning setup to efficiently solve robotic manipulation tasks like reaching and pushing.
We have seen impressive progress over the recent years in learning based approaches to perform a plethora of manipulation tasks Andrychowicz et al., 2018; Pinto & Gupta, 2016; Agrawal et al., 2016) .
However, these systems are typically task-centric savants -able to only execute a single task that they were trained for.
This is because these systems, whether leveraging demonstrations or environmental rewards, attempt to learn each task tabula rasa, where low to high level motor behaviours, are all acquired from scratch in context of the specified task.
In contrast, we humans are adept at a variety of basic manipulation skills e.g. picking, pushing, grasping etc., and can effortlessly perform these diverse tasks via a unified manipulation system.
Figure: Sample motor programs that emerge by discovering the space of motor programs from a diverse set of robot demonstration data in an unsupervised manner. These motor programs facilitate understanding the commonalities across various demonstrations, and accelerate learning for downstream tasks.
How can we step away from the paradigm of learning task-centric savants, and move towards building similar unified manipulation systems?
We can begin by not treating these tasks independently, but via instead exploiting the commonalities across them.
One such commonality relates to the primitive actions executed to accomplish the tasks -while the high-level semantics of tasks may differ significantly, the low and mid-level motor programs across them are often shared e.g. to either pick or push an object, one must move the hand towards it.
This concept of motor programs can be traced back to the work of Lashley, who noted that human motor movements consist of 'orderly sequences' that are not simply sequences of stimulus-response patterns.
The term 'motor programs' is however better attributed to Keele (1968) as being representative of 'muscle commands that execute a movement sequence uninfluenced by peripheral feedback', though later works shifted the focus from muscle commands to the movement itself, while allowing for some feedback (Adams, 1971) .
More directly relevant to our motivation is Schmidt's notion of 'generalized' motor programs (Schmidt, 1975) that can allow abstracting a class of movement patterns instead of a singular one.
In this work, we present an approach to discover the shared space of (generalized) motor programs underlying a variety of tasks, and show that elements from this space can be composed to accomplish diverse tasks.
Not only does this allow understanding the commonalities and shared structure across diverse skills, the discovered space of motor programs can provide a high-level abstraction using which new skills can be acquired quickly by simply learning the set of desired motor programs to compose.
We are not the first to advocate the use of such mid-level primitives for efficient learning or generalization, and there have been several reincarnations of this idea over the decades, from 'operators' in the classical STRIPS algorithm (Fikes & Nilsson, 1971) , to 'options' (Sutton et al., 1999) or 'primitives' (Schaal et al., 2005) in modern usage.
These previous approaches however assume a set of manually defined/programmed primitives and therefore bypass the difficulty of discovering them.
While some attempts have been made to simultaneously learn the desired skill and the underlying primitives, learning both from scratch is difficult, and such approaches are therefore restricted to narrow tasks.
Towards overcoming this difficulty, we observe that instead of learning the primitives from scratch in the context of a specific task, we can discover them using demonstrations of a diverse set of tasks.
Concretely, by leveraging demonstrations for different skills e.g. pouring, grasping, opening etc., we discover the motor programs (or movement primitives) that occur across these.
We present an approach to discover movement primitives from a set of unstructured robot demonstrations, i.e., demonstrations without additional parsing or segmentation labels available.
This is a challenging task, as each demonstration is composed of a varying number of unknown primitives, and therefore the process of learning entails both learning the space of primitives and understanding the available demonstrations in the context of these primitives.
Our approach is based on the insight that an abstraction of a demonstration into a sequence of motor programs or primitives, each of which corresponds to an implied movement sequence, must yield back the demonstration when the inferred primitives are 'recomposed'.
We build on this and formulate an unsupervised approach to jointly learn the space of movement primitives, as well as a parsing of the available demonstrations into a high-level sequence of these primitives.
We demonstrate that our method allows us to learn a primitive space that captures the shared motions required across diverse skills, and that these motor programs can be adapted and composed to further perform specific tasks.
Furthermore, we show that these motor programs are semantically meaningful, and can be recombined to solved robotic tasks using reinforcement learning.
Specifically, solving reaching and pushing tasks with reinforcement learning over the space of primitives achieves 2 orders of magnitude faster training than reinforcement learning in the low-level control space.
We have presented an unsupervised approach to discover motor programs from a set of unstructured robot demonstrations.
Through the insight that learned motor programs should recompose into the original demonstration while being simplistic, we discover a coherent and diverse latent space of primitives on the MIME (Sharma et al., 2018) dataset.
We also observed that the learned primitives were semantically meaningful, and useful for efficiently learning downstream tasks in simulation.
We hope that the contributions from our work enable learning and executing primitives in a plethora of real-world robotic tasks.
It would also be interesting to leverage the learned motor programs in context of continual learning, to investigate how the discovered space can be adapted and expanded in context of novel robotic tasks. | We learn a space of motor primitives from unannotated robot demonstrations, and show these primitives are semantically meaningful and can be composed for new robot tasks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:640 |
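The "recompose and compare" insight in the entry above can be illustrated with a toy objective: a candidate parse of a demonstration into primitives is scored by how well the rolled-out primitives reproduce the trajectory, plus a parsimony penalty on how many primitives are used. The constant-velocity primitives, the penalty weight, and the function names below are invented for this sketch; the paper learns both the primitive space and the parsing with neural networks rather than scoring hand-specified parses.

```python
import numpy as np

def decode(primitives, durations, start):
    """Recompose a trajectory by rolling out each primitive (a fixed velocity here)
    for its duration; in the actual approach a learned decoder plays this role."""
    traj, state = [], np.asarray(start, dtype=float)
    for vel, dur in zip(primitives, durations):
        for _ in range(dur):
            state = state + np.asarray(vel, dtype=float)
            traj.append(state.copy())
    return np.stack(traj)

def parse_score(demo, primitives, durations, parsimony_weight=0.05):
    """Lower is better: reconstruction error of the recomposed demonstration plus a
    parsimony penalty on the number of primitives used (weight is arbitrary)."""
    recon = decode(primitives, durations, demo[0])
    n = min(len(recon), len(demo) - 1)
    recon_err = float(np.mean((recon[:n] - demo[1:n + 1]) ** 2))
    return recon_err + parsimony_weight * len(primitives)

# Toy demonstration in 2D: 10 steps to the right, then 10 steps up, from the origin.
steps = np.vstack([np.tile([0.1, 0.0], (10, 1)), np.tile([0.0, 0.1], (10, 1))])
demo = np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

good = parse_score(demo, primitives=[[0.1, 0.0], [0.0, 0.1]], durations=[10, 10])
bad = parse_score(demo, primitives=[[0.05, 0.05]], durations=[20])
print(f"two-primitive parse: {good:.4f}  single-primitive parse: {bad:.4f}")
```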
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Using modern deep learning models to make predictions on time series data from wearable sensors generally requires large amounts of labeled data.
However, labeling these large datasets can be both cumbersome and costly.
In this paper, we apply weak supervision to time series data, and programmatically label a dataset from sensors worn by patients with Parkinson's.
We then built an LSTM model that predicts when these patients exhibit clinically relevant freezing behavior (an inability to make effective forward stepping).
We show that (1) when our model is trained using patient-specific data (prior sensor sessions), we come within 9% AUROC of a model trained using hand-labeled data, and (2) when we assume no prior observations of subjects, our weakly supervised model matches the performance of the hand-labeled model.
These results demonstrate that weak supervision may help reduce the need to painstakingly hand label time series training data.
Time series data generated by wearable sensors are an increasingly common source of biomedical data.
With their ability to monitor events in non-laboratory conditions, sensors offer new insights into human health across a diverse range of applications, including continuous glucose monitoring BID1, atrial fibrillation detection BID11, fall detection BID2, and general human movement monitoring BID6.
Supervised machine learning with sensor time series data can help automate many of these monitoring tasks and enable medical professionals to make more informed decisions.
However, developing these supervised models is challenging due to the cost and difficulty of obtaining labeled training data, especially in settings with considerable inter-subject variability, as is common in human movement research BID5.
Traditionally, medical professionals must hand label events observed in controlled laboratory settings.
When the events of interest are rare, this process is time consuming, expensive, and does not scale to the sizes needed to train robust machine learning models.
Thus there is a need to efficiently label the large amounts of data that machine learning algorithms require for time series tasks.
In this work, we explore weakly supervised BID10 models for time series classification.
Instead of using manually labeled training data, weak supervision encodes domain insights in the form of heuristic labeling functions, which are used to create large, probabilistically labeled training sets.
This method is especially useful for time series classification, where the sheer number of data points makes manual labeling difficult.
As a motivating test case, we focus on training a deep learning model to classify freezing behaviors in people with Parkinson's disease.
We hypothesize that by encoding biomechanical knowledge about human movement and Parkinson's BID5 into our weakly supervised model, we can reduce the need for large amounts of hand-labeled data and achieve similar performance to fully supervised models for classifying freezing behavior.
We focus on two typical clinical use cases when making predictions for a patient: (1) where we have no prior observations of the patient, and (2) where we have at least one observation of the patient.
Our work demonstrates the potential of weak supervision on time series tasks.
In both experiments, our weakly supervised models performed close to or match the fully supervised models.
Further, the amount of data available for the weak supervision task was fairly small; with more unlabeled data, we expect to be able to improve performance BID9.
These results show that costly and time-intensive hand labeling may not be required to obtain the desired performance from a given classifier.
In the future, we plan to add more and different types of sensor streams and modalities (e.g., video).
We also plan to use labeling functions to better model the temporal correlation between individual segments of these streams, which can potentially improve our generative model and hence end to end performance. | We demonstrate the feasibility of a weakly supervised time series classification approach for wearable sensor data. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:641 |
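The heuristic labeling functions mentioned in the entry above might look like the toy examples below, which vote on short windows of accelerometer data and are then combined. The thresholds, frequency bands, and axis choices are invented for illustration (real labeling functions would encode the clinical and biomechanical knowledge described in the text), and a learned generative label model would normally replace the simple majority vote shown here.

```python
import numpy as np

ABSTAIN, FREEZE, NO_FREEZE = -1, 1, 0

def lf_low_stride_power(window):
    """Vote FREEZE when forward-axis variance is unusually low (threshold is a guess)."""
    return FREEZE if np.var(window[:, 0]) < 0.02 else ABSTAIN

def lf_trembling_band(window):
    """Vote FREEZE when higher-frequency vertical energy dominates the lower band."""
    spectrum = np.abs(np.fft.rfft(window[:, 2]))
    high, low = spectrum[8:20].sum(), spectrum[1:8].sum() + 1e-8
    return FREEZE if high / low > 1.5 else NO_FREEZE

def majority_vote(window, lfs):
    """Aggregate labeling-function votes; ties favor NO_FREEZE because of rounding."""
    votes = [lf(window) for lf in lfs]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return int(np.round(np.mean(votes)))

# Toy usage: a 2-second window of 3-axis accelerometer data sampled at 64 Hz.
rng = np.random.default_rng(0)
window = rng.normal(scale=0.1, size=(128, 3))
print(majority_vote(window, [lf_low_stride_power, lf_trembling_band]))
```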
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Learning semantic correspondence between the structured data (e.g., slot-value pairs) and associated texts is a core problem for many downstream NLP applications, e.g., data-to-text generation.
Recent neural generation methods require to use large scale training data.
However, the collected data-text pairs used for training are usually only loosely corresponded, where texts contain additional or contradictory information compared to their paired inputs.
In this paper, we propose a local-to-global alignment (L2GA) framework to learn semantic correspondences from loosely related data-text pairs.
First, a local alignment model based on multi-instance learning is applied to build the semantic correspondences within a data-text pair.
Then, a global alignment model built on top of a memory guided conditional random field (CRF) layer is designed to exploit dependencies among alignments in the entire training corpus, where the memory is used to integrate the alignment clues provided by the local alignment model.
Therefore, it is capable of inducing missing alignments for text spans that are not supported by its imperfect paired input.
Experiments on a recent restaurant dataset show that our proposed method can improve alignment accuracy and, as a by-product, is also applicable to inducing semantically equivalent training data-text pairs for neural generation models.
Learning semantic correspondences between structured data (e.g., slot-value pairs in a meaning representation (MR)) and associated description texts is one of the core problems in the NLP community (Barzilay & Lapata, 2005); e.g., data-to-text generation produces texts based on the learned semantic correspondences.
Recent data-to-text generation methods, especially neural-based methods, which are data-hungry, adopt data-text pairs collected from the web for training.
Such collected corpora usually contain loosely corresponded data-text pairs (Perez-Beltrachini & Gardent, 2017; Nie et al., 2019), where text spans contain information that is not supported by the imperfect structured input.
Figure 1 depicts an example, where the slot-value pair Price=Cheap can be aligned to the text span low price range, while the text span restaurant is not supported by any slot-value pair in the paired input MR.
Most previous work on learning semantic correspondences (Barzilay & Lapata, 2005; Liang et al., 2009; Kim & Mooney, 2010; Perez-Beltrachini & Lapata, 2018) focuses on characterizing local interactions between every text span and a corresponding slot presented in its paired MR.
Such methods cannot work directly on loosely corresponded data-text pairs, as the setting is different.
In this work, we make a step towards explicit semantic correspondences (i.e., alignments) in loosely corresponded data-text pairs.
Compared with the traditional setting, which only attempts to induce alignments for every text span with a corresponding slot presented in its paired MR, we propose a Local-to-Global Alignment (L2GA) framework, where the local alignment model discovers the correspondences within a single data-text pair (e.g., low price range is aligned with the slot Price in Figure 1) and a global alignment model exploits dependencies among alignments across the entire set of data-text pairs, and is therefore able to induce missing attributes for text spans not supported by the noisy input data (e.g., restaurant is aligned with the slot EatType in Figure 1).
Specifically, our proposed L2GA is composed of two parts.
The local alignment model is a neural method optimized via a multi-instance learning paradigm (Perez-Beltrachini & Lapata, 2018) which automatically captures correspondences by maximizing the similarities between co-occurring slots and texts within a data-text pair.
Our proposed global alignment model is a memory guided conditional random field (CRF) based sequence labeling framework.
The CRF layer is able to learn dependencies among semantic labels over the entire corpus and therefore is suitable for inferring missing alignments of unsupported text spans.
However, since there are no semantic labels provided for sequence labeling, we can only leverage limited supervision provided in a data-text pair.
We start by generating pseudo labels using a string-matching heuristic between words and slots (e.g., Golden Palace is aligned with Name in Figure 1 ).
The pseudo labels leave a large portion of text spans unmatched (e.g., low price and restaurant cannot be directly matched in Figure 1 ). We tackle this challenge by:
a) changing the calculation of the prediction probability in the CRF layer, where we sum probabilities over possible label sequences for unmatched text spans to allow inference on unmatched words;
b) incorporating the alignment results produced by the local alignment model as an additional memory to guide the CRF layer, so that the semantic correspondences captured by the local alignment model can work together with the CRF layer to induce alignments both locally and globally.
We conduct experiments with our proposed method on a recent restaurant dataset, the E2E challenge benchmark (Novikova et al., 2017a) ; results show that our framework improves the alignment accuracy with respect to previous methods.
Moreover, our proposed method can explicitly detect unaligned errors present in the original training corpus and provide semantically equivalent training data-text pairs for neural generation models.
Experimental results also show that our proposed method can improve content consistency for neural generation models.
In this paper, we study the problem of learning alignments in loosely related data-text pairs.
We propose a local-to-global framework which not only induces semantic correspondences for words that are related to its paired input but also infers potential labels for text spans that are not supported by its incomplete input.
We find that our proposed method improves the alignment accuracy and can help reduce the noise in the original training corpus.
In the future, we will explore more challenging datasets with more complex data schema.
The embedding dimensions are set to 300 and 100, respectively.
The dimensions of trainable hidden units in LSTMs are all set to 400.
We first pre-train our local model for 5 epochs and then train our proposed local-to-global model jointly for 10 epochs, selected according to the validation set.
During training, we regularize all layers with a dropout rate of 0.1.
We use stochastic gradient descent (SGD) for optimisation with learning rate 0.015.
The gradient is truncated by 5. | We propose a local-to-global alignment framework to learn semantic correspondences from noisy data-text pairs with weak supervision | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:642 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Imitation learning algorithms provide a simple and straightforward approach for training control policies via standard supervised learning methods.
By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic complexities and optimization challenges of reinforcement learning, at the cost of requiring an expert demonstrator -- typically a person -- to provide the demonstrations.
In this paper, we ask: can we use imitation learning to train effective policies without any expert demonstrations?
The key observation that makes this possible is that, in the multi-task setting, trajectories that are generated by a suboptimal policy can still serve as optimal examples for other tasks.
In particular, in the setting where the tasks correspond to different goals, every trajectory is a successful demonstration for the state that it actually reaches.
Informed by this observation, we propose a very simple algorithm for learning behaviors without any demonstrations, user-provided reward functions, or complex reinforcement learning methods.
Our method simply maximizes the likelihood of actions the agent actually took in its own previous rollouts, conditioned on the goal being the state that it actually reached.
Although related variants of this approach have been proposed previously in imitation learning settings with example demonstrations, we present the first instance of this approach as a method for learning goal-reaching policies entirely from scratch.
We present a theoretical result linking self-supervised imitation learning and reinforcement learning, and empirical results showing that it performs competitively with more complex reinforcement learning methods on a range of challenging goal reaching problems.
Reinforcement learning (RL) algorithms hold the promise of providing a broadly-applicable tool for automating control, and the combination of high-capacity deep neural network models with RL extends their applicability to settings with complex observations and that require intricate policies.
However, RL with function approximation, including deep RL, presents a challenging optimization problem.
Despite years of research, current deep RL methods are far from a turnkey solution: most popular methods lack convergence guarantees (Baird, 1995; Tsitsiklis & Van Roy, 1997) or require prohibitive numbers of samples (Schulman et al., 2015; Lillicrap et al., 2015) .
Moreover, in practice, many commonly used algorithms are extremely sensitive to hyperparameters (Henderson et al., 2018) .
Besides the optimization challenges, another usability challenge of RL is reward function design: although RL automatically determines how to solve the task, the task itself must be specified in a form that the RL algorithm can interpret and optimize.
These challenges prompt us to consider whether there might exist a general method for learning behaviors without the need for complex, deep RL algorithms.
Imitation learning is an alternative paradigm to RL that provides a simple and straightforward approach for training control policies via standard supervised learning methods.
By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic complexities and optimization challenges of RL.
Supervised learning algorithms in deep learning have matured to the point of being robust and reliable, and imitation learning algorithms have demonstrated success in acquiring behaviors robustly and reliably from high-dimensional sensory data such as images (Rajeswaran et al., 2017; Lynch et al., 2019) .
The catch is that imitation learning methods require an expert demonstrator -typically a human -to provide a number of demonstrations of optimal behavior.
Obtaining expert demonstrations can be challenging; the large number of demonstrations required limits the scalability of such algorithms.
In this paper, we ask: can we use ideas from imitation learning to train effective policies without any expert demonstrations, retaining the benefits of imitation learning, but making it possible to learn goal-directed behavior autonomously from scratch?
The key observation for making progress on this problem is that, in the multi-task setting, trajectories that are generated by a suboptimal policy can serve as optimal examples for other tasks.
In particular, in the setting where the tasks correspond to reaching different goal states, every trajectory is a successful demonstration for the state that it actually reaches.
Similar observations have been made in prior works as well (Kaelbling, 1993; Andrychowicz et al., 2017; Nair et al., 2018; Mavrin et al., 2019; Savinov et al., 2018) , but have been used to motivate data reuse in off-policy RL or semiparametric methods.
Our approach will leverage this idea to obtain near-optimal goal-conditioned policies without RL or reward functions.
The algorithm that we study is, at its core, very simple: at each iteration, we run our latest goal-conditioned policy, collect data, and then use this data to train a policy with supervised learning.
Supervision is obtained by noting that each action that is taken is a good action for reaching the states that actually occurred in future time steps along the same trajectory.
This algorithm resembles imitation learning, but is self-supervised.
This procedure combines the benefits of goal-conditioned policies with the simplicity of supervised learning, and we theoretically show that this algorithm corresponds to a convergent policy learning procedure.
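To make the loop just described concrete, the sketch below shows one iteration of this self-supervised procedure; the `policy(states, goals)` distribution interface, the trajectory format, and the use of PyTorch are illustrative assumptions rather than details taken from the paper.
```python
import random
import torch

def self_supervised_imitation_step(policy, optimizer, trajectories):
    """One iteration: relabel the agent's own rollouts with goals they actually
    reached, then perform ordinary maximum-likelihood (supervised) training."""
    states, actions, goals = [], [], []
    for traj in trajectories:              # traj: list of (state, action) tensor pairs
        for t, (s_t, a_t) in enumerate(traj):
            # Hindsight relabeling: any later state of the same trajectory is a
            # goal that the action a_t demonstrably helped reach.
            goal_state, _ = traj[random.randint(t, len(traj) - 1)]
            states.append(s_t)
            actions.append(a_t)
            goals.append(goal_state)

    states, actions, goals = map(torch.stack, (states, actions, goals))
    loss = -policy(states, goals).log_prob(actions).mean()  # plain supervised learning
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```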
While several prior works have proposed training goal-conditioned policies via imitation learning based on a superficially similar algorithm (Ding et al., 2019; Lynch et al., 2019) , to our knowledge no prior work proposes a complete policy learning algorithm based on this idea that learns from scratch, without expert demonstrations.
This procedure reaps the benefits of off-policy data re-use without the need for learning complex Q functions or value functions.
Moreover, we can bootstrap our algorithm with a small number of expert demonstrations, such that it can continue to improve its behavior in a self-supervised manner, without dealing with the challenges of combining imitation learning with off-policy RL.
The main contribution of our work is a complete algorithm for learning policies from scratch via goal-conditioned imitation learning, and a demonstration that this algorithm can successfully train goal-conditioned policies.
Our theoretical analysis of self-supervised goal-conditioned imitation learning shows that this method optimizes a lower bound on the probability that the agent reaches the desired goal.
Empirically, we show that our proposed algorithm is able to learn goal reaching behaviors from scratch without the need for an explicit reward function or expert demonstrations. | Learning how to reach goals from scratch by using imitation learning with data relabeling | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:643 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Neural networks have recently shown excellent performance on numerous classification tasks.
These networks often have a large number of parameters and thus require much data to train.
When the number of training data points is small, however, a network with high flexibility will quickly overfit the training data, resulting in a large model variance and a poor generalization performance.
To address this problem, we propose a new ensemble learning method called InterBoost for small-sample image classification.
In the training phase, InterBoost first randomly generates two complementary datasets to train two base networks of the same structure separately, and then the next two complementary datasets for further training the networks are generated through interaction (or information sharing) between the two previously trained base networks.
This interactive training process continues iteratively until a stop criterion is met.
In the testing phase, the outputs of the two networks are combined to obtain one final score for classification.
Detailed analysis of the method is provided for an in-depth understanding of its mechanism.
Image classification is an important application of machine learning and data mining.
Recent years have witnessed tremendous improvement in large-scale image classification due to the advances of deep learning BID15 BID17 BID7 BID4 .
Despite recent breakthroughs in applying deep networks, one persistent challenge is classification with a small number of training data points BID12 .
Small-sample classification is important, not only because humans learn a concept of class without millions or billions of data but also because many kinds of real-world data have a small quantity.
Given a small number of training data points, a large network will inevitably encounter the overfitting problem, even when dropout BID16 and weight decay are applied during training BID19 .
This is mainly because a large network represents a large function space, in which many functions can fit a given small-sample dataset, making it difficult to find the underlying true function that is able to generalize well.
As a result, a neural network trained with a small number of data points usually exhibits a large variance. Ensemble learning is one way to reduce the variance.
According to bias-variance dilemma BID2 , there is a trade-off between the bias and variance contributions to estimation or classification errors.
The variance is reduced when multiple models or ensemble members are trained with different datasets and are combined for decision making, and the effect is more pronounced if ensemble members are accurate and diverse BID3 . There
exist two classic strategies of ensemble learning BID21 BID13 . The first
one is Bagging BID20 and variants thereof. This strategy
trains independent classifiers on bootstrap re-samples of training data and then combines classifiers based on some rules, e.g. weighted average. Bagging methods
attempt to obtain diversity by bootstrap sampling, i.e. random sampling with replacement. There is no guarantee
of finding complementary ensemble members, and new datasets constructed by bootstrap sampling will contain even fewer data points, which can potentially make the overfitting problem even more severe. The second strategy is
Boosting BID14 BID10 and its variants. This strategy starts from
a classifier trained on the available data and then sequentially trains new member classifiers. Taking Adaboost BID20 as
an example, a classifier in Adaboost is trained according to the training error rates of previous classifiers. Adaboost works well for
weak base classifiers. If the base classifier
is of high complexity, such as a large neural network, the first base learner will overfit the training data. Consequently, either the
Adaboost procedure is stopped or the second classifier has to be trained on data with the original weights, i.e. to start from scratch again, which cannot ensure the diversity of base networks. In addition, there also exist some "implicit" ensemble methods in the area of neural networks. Dropout BID16 , DropConnect
BID18 and Stochastic Depth techniques BID5 create an ensemble by dropping some hidden nodes, connections (weights) and layers, respectively. Snapshot Ensembling BID6
is a method that is able to, by training only once and finding multiple local minima of the objective function, obtain many ensemble members and then combine these members to get a final decision. Temporal ensembling, a parallel
work to Snapshot Ensembling, trains a single network, but the predictions made at different epochs correspond to an ensemble prediction of multiple sub-networks because of dropout regularization BID8 . These works have demonstrated advantages
of using an ensemble technique. In these existing "implicit" ensemble methods
, however, achieving diversity is left to randomness, making them ineffective for small-sample classification. Therefore, there is a need for new ensemble learning methods able to train diverse and complementary neural networks for small-sample classification. In this paper, we propose a new ensemble method
called InterBoost for training two base neural networks with the same structure. In the method, the original dataset is first re-weighted
by two sets of complementary weights. Secondly, the two base neural networks are trained on the
two re-weighted datasets, separately. Then we update training data weights according to prediction
scores of the two base networks on training data, so there is an interaction between the two base networks during the training process. When base networks are trained interactively with the purpose
of deliberately pushing each other in opposite directions, they will be complementary. This process of training the networks and updating the weights is repeated
until a stop criterion is met. In this paper, we present the training and test procedures of the proposed ensemble method and evaluate it on the UIUC-Sports dataset BID9 and the LabelMe dataset BID11 , with a comparison to Bagging, Adaboost, SnapShot Ensembling and other existing methods.
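As a rough illustration of one interaction round of the procedure just described, the sketch below trains the two base networks on complementary re-weighted copies of the data and then recomputes the weights from their predictions. The Keras-style `fit`/`predict` interface and the specific re-weighting formula are stand-in assumptions; the paper defines its own update equations, which are not reproduced here.
```python
import numpy as np

def interboost_round(net1, net2, X, y, w1, epochs):
    """One InterBoost interaction round (illustrative sketch only)."""
    w2 = 1.0 - w1                                    # keep the two datasets complementary
    net1.fit(X, y, sample_weight=w1, epochs=epochs)  # assumed Keras-like API
    net2.fit(X, y, sample_weight=w2, epochs=epochs)

    idx = np.arange(len(y))
    p1 = net1.predict(X)[idx, y]                     # each net's probability of the true class
    p2 = net2.predict(X)[idx, y]
    # Stand-in update: the more accurate a network is on a point, the smaller its
    # weight for that point in the next round, pushing the two networks apart.
    w1_next = np.clip(p2 / (p1 + p2 + 1e-12), 1e-3, 1.0 - 1e-3)
    return w1_next
```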
During the training process, we always keep the constraints W_1d + W_2d = 1 and 0 < W_1d, W_2d < 1, to ensure that the base networks are diverse and complementary.
The weight-update equations are designed so that the updating rule is sensitive to small differences between the prediction probabilities from the two base networks, to prevent premature training.
Furthermore, if the prediction for a data point in one network is more accurate than in the other network, its weight in the next round will be smaller than its weight for the other network, thus focusing the training of the individual networks on more different regions. The training process generates many diverse training dataset pairs, as shown in Figure 3.
That is, each base network will be trained on these diverse datasets in sequence, which is equivalent to applying an "implicit" ensemble to each base network.
Therefore, the base networks will become more and more accurate during the training process.
At the same time, the two networks are complementary to each other. In each iteration, determining the number of epochs for training the base networks is also crucial.
If the number is too large, the two base networks will fit the training data too well, making it difficult for the data-weight updates to generate diverse datasets.
If it is too small, it is difficult to obtain accurate base classifiers.
In experiments, we find that a suitable number of epochs in each iteration is one that makes the classification accuracy of the base network fall in the interval (0.9, 0.98). Similar to Bagging and Adaboost, our method has no limitation on the type of neural networks.
In addition, it is straightforward to extend the proposed ensemble method to multiple networks, just by keeping the constraint sum_{i=1}^{H} W_id = 1 for every d in {1, 2, ..., D}, in which H is the number of base networks and 0 < W_id < 1.
In the paper, we have proposed an ensemble method called InterBoost for training neural networks for small-sample classification and detailed the training and test procedures.
In the training procedure, the two base networks share information with each other in order to push each other optimized in different directions.
At the same time, each base network is trained on diverse datasets iteratively.
Experimental results on UIUC-Sports (UIUC) and LabelMe (LM) datasets showed that our ensemble method does not outperform other ensemble methods.
Future work includes improving the proposed method, increasing the number of networks, experimenting on different types of network as well as different kinds of data to evaluate the effectiveness of the InterBoost method. | In the paper, we proposed an ensemble method called InterBoost for training neural networks for small-sample classification. The method has better generalization performance than other ensemble methods, and reduces variances significantly. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:644 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Interpreting neural networks is a crucial and challenging task in machine learning.
In this paper, we develop a novel framework for detecting statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights.
Depending on the desired interactions, our method can achieve significantly better or similar interaction detection performance compared to the state-of-the-art without searching an exponential solution space of possible interactions.
We obtain this accuracy and efficiency by observing that interactions between input features are created by the non-additive effect of nonlinear activation functions, and that interacting paths are encoded in weight matrices.
We demonstrate the performance of our method and the importance of discovered interactions via experimental results on both synthetic datasets and real-world application datasets.
Despite their strong predictive power, neural networks have traditionally been treated as "black box" models, preventing their adoption in many application domains.
It has been noted that complex machine learning models can learn unintended patterns from data, raising significant risks to stakeholders BID43 .
Therefore, in applications where machine learning models are intended for making critical decisions, such as healthcare or finance, it is paramount to understand how they make predictions BID6 BID17 . Existing
approaches to interpreting neural networks can be summarized into two types. One type
is direct interpretation, which focuses on 1) explaining
individual feature importance, for example by computing input gradients BID37 BID34 BID40 or by decomposing predictions BID2 BID36 , 2) developing attention-based models, which illustrate where neural networks focus during inference BID20 BID27 BID47 , and 3) providing
model-specific visualizations, such as feature map and gate activation visualizations BID48 BID21 . The other type
is indirect interpretation, for example post-hoc interpretations of feature importance BID32 and knowledge distillation to simpler interpretable models BID7 . It has been commonly
believed that one major advantage of neural networks is their capability of modeling complex statistical interactions between features for automatic feature learning. Statistical interactions
capture important information on where features often have joint effects with other features on predicting an outcome. The discovery of interactions
is especially useful for scientific discoveries and hypothesis validation. For example, physicists may be
interested in understanding what joint factors provide evidence for new elementary particles; doctors may want to know what interactions are accounted for in risk prediction models, to compare against known interactions from existing medical literature. In this paper, we propose an accurate and efficient framework, called Neural Interaction Detection (NID), which detects statistical interactions of any order or form captured by a feedforward neural network, by examining its weight matrices. Our approach is efficient because
it avoids searching over an exponential solution space of interaction candidates by making an approximation of hidden unit importance at the first hidden layer via all weights above and doing a 2D traversal of the input weight matrix. We provide theoretical justifications
on why interactions between features are created at hidden units and why our hidden unit importance approximation satisfies bounds on hidden unit gradients. Top-K true interactions are determined
from interaction rankings by using a special form of generalized additive model, which accounts for interactions of variable order BID46 BID25 . Experimental results on simulated datasets
and real-world datasets demonstrate the effectiveness of NID compared to the state-of-the-art methods in detecting statistical interactions. The rest of the paper is organized as follows: we first review related work and define notations in Section 2. In Section 3, we examine and quantify the
interactions encoded in a neural network, which leads to our framework for interaction detection detailed in Section 4. Finally, we study our framework empirically
and demonstrate its practical utility on real-world datasets in Section 5.
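The following sketch illustrates the kind of weight inspection described above for the pairwise case: first-layer weights are combined with an aggregate of the upper-layer weights to score feature pairs. The exact aggregation and scoring functions used in the paper may differ; this is only an approximate rendering, and the MLP weight layout (rows index output units) is an assumption.
```python
import numpy as np
from itertools import combinations

def pairwise_interaction_ranking(weight_matrices, output_weights):
    """Rank feature pairs by an interaction score read off a trained MLP's weights.

    weight_matrices: [W1, W2, ..., WL], each of shape (out_units, in_units).
    output_weights:  final-layer weight vector over the last hidden layer.
    """
    W1 = np.abs(weight_matrices[0])             # (hidden_1, num_features)
    influence = np.abs(output_weights)
    for W in reversed(weight_matrices[1:]):     # aggregate |weights| down to layer 1
        influence = influence @ np.abs(W)       # -> importance of each first-layer unit

    scores = {}
    for i, j in combinations(range(W1.shape[1]), 2):
        # A pair interacts at a unit only as strongly as its weaker incoming weight.
        scores[(i, j)] = float(np.sum(influence * np.minimum(W1[:, i], W1[:, j])))
    return sorted(scores.items(), key=lambda kv: -kv[1])
```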
We presented our NID framework, which detects statistical interactions by interpreting the learned weights of a feedforward neural network.
The framework has the practical utility of accurately detecting general types of interactions without searching an exponential solution space of interaction candidates.
Our core insight was that interactions between features must be modeled at common hidden units, and our framework decoded the weights according to this insight.In future work, we plan to detect feature interactions by accounting for common units in intermediate hidden layers of feedforward networks.
We would also like to use the perspective of interaction detection to interpret weights in other deep neural architectures.A PROOF AND DISCUSSION FOR PROPOSITION 2Given a trained feedforward neural network as defined in Section 2.3, we can construct a directed acyclic graph G = (V, E) based on non-zero weights as follows.
We create a vertex for each input feature and hidden unit in the neural network: V = {v ,i |∀i, }, where v ,i is the vertex corresponding to the i-th hidden unit in the -th layer.
Note that the final output y is not included.
We create edges based on the non-zero entries in the weight matrices, i.e., DISPLAYFORM0 Note that under the graph representation, the value of any hidden unit is a function of parent hidden units.
In the following proposition, we will use vertices and hidden units interchangeably.
Proposition 2 (Interactions at Common Hidden Units).
Consider a feedforward neural network with input feature DISPLAYFORM1 , there exists a vertex v I in the associated directed graph such that I is a subset of the ancestors of v I at the input layer (i.e., = 0).Proof
. We prove
Proposition 2 by contradiction.Let I be an interaction where there is no vertex in the associated graph which satisfies the condition. Then, for
any vertex v L,i at the L-th layer, the value f i of the corresponding hidden unit is a function of its ancestors at the input layer I i where I ⊂ I i .Next, we group
the hidden units at the L-th layer into non-overlapping subsets by the first missing feature with respect to the interaction I. That is, for element
i in I, we create an index set S i ∈ [p L ]: DISPLAYFORM2 Note that the final output of the network is a weighed summation over the hidden units at the L-th layer: DISPLAYFORM3 Since that j∈Si w y j f j x Ij is not a function of x i , we have that ϕ (·) is a function without the interaction I, which contradicts our assumption.The reverse of this statement, that a common descendant will create an interaction among input features, holds true in most cases. The existence of counterexamples
is manifested when early hidden layers capture an interaction that is negated in later layers. For example, the effects of two
interactions may be directly removed in the next layer, as in the case of the following expression: max{w 1 x 1 + w 2 x 2 , 0} − max{−w 1 x 1 − w 2 x 2 , 0} = w 1 x 1 + w 2 x 2 . Such an counterexample is legitimate
; however, due to random fluctuations, it is highly unlikely in practice that the w 1 s and the w 2 s from the left hand side are exactly equal. | We detect statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:645 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The neural linear model is a simple adaptive Bayesian linear regression method that has recently been used in a number of problems ranging from Bayesian optimization to reinforcement learning.
Despite its apparent successes in these settings, to the best of our knowledge there has been no systematic exploration of its capabilities on simple regression tasks.
In this work we characterize these on the UCI datasets, a popular benchmark for Bayesian regression models, as well as on the recently introduced ''gap'' datasets, which are better tests of out-of-distribution uncertainty.
We demonstrate that the neural linear model is a simple method that shows competitive performance on these tasks.
Despite the recent successes that neural networks have shown in an impressive range of tasks, they tend to be overconfident in their predictions (Guo et al., 2017) .
Bayesian neural networks (BNNs; Neal (1995) ) attempt to address this by providing a principled framework for uncertainty estimation in predictions.
However, inference in BNNs is intractable to compute, requiring approximate inference techniques.
Of these, Monte Carlo methods and variational methods, including Monte Carlo dropout (MCD) (Gal and Ghahramani, 2016) , are popular; however, the former are difficult to tune, and the latter are often limited in their expressiveness (Foong et al., 2019b; Yao et al., 2019; Foong et al., 2019a) .
The neural linear model represents a compromise between tractability and expressiveness for BNNs in regression settings: instead of attempting to perform approximate inference over the entire set of weights, it performs exact inference on only the last layer, where prediction can be done in closed form.
It has recently been used in active learning (Pinsler et al., 2019) , Bayesian optimization (Snoek et al., 2015) , reinforcement learning (Riquelme et al., 2018) , and AutoML (Zhou and Precioso, 2019), among others; however, to the best of our knowledge, there has been no systematic attempt to benchmark the model in the simple regression setting.
In this work we do so, first demonstrating the model on a toy example, followed by experiments on the popular UCI datasets (as in Hernández-Lobato and Adams (2015) ) and the recent UCI gap datasets from Foong et al. (2019b) , who identified (along with Yao et al. (2019) ) well-calibrated 'in-between' uncertainty as a desirable feature of BNNs.
In this section, we briefly describe the different models we train in this work, which are variations of the neural linear (NL) model, in which a neural network extracts features from the input to be used as basis functions for Bayesian linear regression.
The central issue in the neural linear model is how to train the network: in this work, we provide three different models, with a total of four different training methods.
For a more complete mathematical description of the models, refer to Appendix A; we summarize the models in Appendix C. Snoek et al. (2015) , we can first train the neural network using maximum a posteriori (MAP) estimation.
After this training phase, the outputs of the last hidden layer of the network are used as the features for Bayesian linear regression.
To reduce overfitting, the noise variance and prior variance (for the Bayesian linear regression) are subsequently marginalized out by slice sampling (Neal et al., 2003) according to the tractable marginal likelihood, using uniform priors.
We refer to this model as the maximum a posteriori neural linear model (which we abbreviate as MAP-L NL, where L is the number of hidden layers in the network).
We tune the hyperparameters for the MAP estimation via Bayesian optimization (Snoek et al., 2012) .
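For concreteness, the closed-form Bayesian linear regression applied to the extracted last-layer features can be sketched as below. Fixed noise and prior variances are used purely for illustration; in the method described above they are marginalized out by slice sampling.
```python
import numpy as np

def posterior(Phi, y, noise_var, prior_var):
    """Phi: (N, D) last-hidden-layer features of the trained network; y: (N,) targets.
    Returns the posterior mean and covariance of the last-layer weights."""
    D = Phi.shape[1]
    precision = Phi.T @ Phi / noise_var + np.eye(D) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ Phi.T @ y / noise_var
    return mean, cov

def predict(phi_star, mean, cov, noise_var):
    """Predictive mean and variance for a single test feature vector phi_star."""
    return phi_star @ mean, phi_star @ cov @ phi_star + noise_var
```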
We have shown benchmark results for different variants of the neural linear model in the regression setting.
Our results show that the successes these models have seen in other areas such as reinforcement and active learning are not unmerited, with the models achieving generally good performance despite their simplicity.
Furthermore, they are not as susceptible to the inability to express gap uncertainty as MFVI or MCD.
However, we have shown that to obtain reasonable performance extensive hyperparameter tuning is often required, unlike MFVI or MCD.
Finally, our work suggests that exact inference on a subset of parameters can perform better than approximate inference on the entire set, at least for BNNs.
We believe this broader issue is worthy of further investigation. | We benchmark the neural linear model on the UCI and UCI "gap" datasets. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:646 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The reproducibility of reinforcement-learning research has been highlighted as a key challenge area in the field.
In this paper, we present a case study in reproducing the results of one groundbreaking algorithm, AlphaZero, a reinforcement learning system that learns how to play Go at a superhuman level given only the rules of the game.
We describe Minigo, a reproduction of the AlphaZero system using publicly available Google Cloud Platform infrastructure and Google Cloud TPUs.
The Minigo system includes both the central reinforcement learning loop as well as auxiliary monitoring and evaluation infrastructure.
With ten days of training from scratch on 800 Cloud TPUs, Minigo can play evenly against LeelaZero and ELF OpenGo, two of the strongest publicly available Go AIs.
We discuss the difficulties of scaling a reinforcement learning system and the monitoring systems required to understand the complex interplay of hyperparameter configurations.
In March 2016, Google DeepMind's AlphaGo BID0 defeated world champion Lee Sedol by using two deep neural networks (a policy and a value network) and Monte Carlo Tree Search (MCTS) to synthesize the output of these two neural networks.
The policy network was trained via supervised learning from human games, and the value network was trained from a much larger corpus of synthetic games generated by sampling game trajectories from the policy network.
AlphaGo Zero BID1 , published in October 2017, described a continuous pipeline, which when initialized with random weights, could train itself to defeat the original AlphaGo system.
The requirement for expert human data was replaced with a requirement for vast amounts of compute: approximately two thousand TPUs were used for 72 hours to train AlphaGo Zero to its full strength.
AlphaZero BID2 presents a refinement of the AlphaGo Zero pipeline, notably removing the gating mechanism for publishing new models. In many ways, AlphaGo Zero can be seen as the logical culmination of fully automating and streamlining the bootstrapping process: the original AlphaGo system was bootstrapped from expert human data and reached a final strength that was somewhat stronger than the best humans.
Then, by generating new training data with the stronger AlphaGo system and repeating the bootstrap process, an even stronger system was created.
By automating the bootstrapping process until it is continuous, a system is created that can train itself to surpass human levels of play, even when starting from random play. In this paper, we discuss our experiences creating Minigo.
About half of our effort went into rebuilding the infrastructure necessary to coordinate a thousand selfplay workers.
The other half of the effort went into monitoring infrastructure to test and verify that what we had built was bug-free.
Despite having at hand a paper describing the final architecture of AlphaZero, we rediscovered the hard way which components of the system were absolutely necessary to get right, and which components we could be messy with.
It stands to reason that without the benefit of pre-existing work, monitoring systems are even more important in the discovery process.
We discuss in particular, | We reproduced AlphaZero on Google Cloud Platform | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:647 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative adversarial networks (GANs) train implicit generative models through solving minimax problems.
Such minimax problems are known as nonconvex-nonconcave, for which the dynamics of first-order methods are not well understood.
In this paper, we consider GANs in the type of the integral probability metrics (IPMs) with the generator represented by an overparametrized neural network.
When the discriminator is solved to approximate optimality in each iteration, we prove that stochastic gradient descent on a regularized IPM objective converges globally to a stationary point with a sublinear rate.
Moreover, we prove that when the width of the generator network is sufficiently large and the discriminator function class has enough discriminative ability, the obtained stationary point corresponds to a generator that yields a distribution that is close to the distribution of the observed data in terms of the total variation.
To the best of our knowledge, we seem to first establish both the global convergence and global optimality of training GANs when the generator is parametrized by a neural network. | We establish global convergence to optimality for IPM-based GANs where the generator is an overparametrized neural network. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:648 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present network embedding algorithms that capture information about a node from the local distribution over node attributes around it, as observed over random walks following an approach similar to Skip-gram.
Observations from neighborhoods of different sizes are either pooled (AE) or encoded distinctly in a multi-scale approach (MUSAE).
Capturing attribute-neighborhood relationships over multiple scales is useful for a diverse range of applications, including latent feature identification across disconnected networks with similar attributes.
We prove theoretically that matrices of node-feature pointwise mutual information are implicitly factorized by the embeddings.
Experiments show that our algorithms are robust, computationally efficient and outperform comparable models on social, web and citation network datasets.
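A small sketch of the sampling step implied by the description above: walking over the graph and crediting the attributes observed r steps away from a source node to scale r (the multi-scale case). The dictionary-based graph representation and the single-direction window are simplifying assumptions.
```python
import random

def sample_walk(graph, start, length):
    """graph: dict mapping a node to its list of neighbours."""
    walk = [start]
    while len(walk) < length and graph[walk[-1]]:
        walk.append(random.choice(graph[walk[-1]]))
    return walk

def node_feature_triples(walk, features, window):
    """Collect (source node, attribute, scale) triples from one random walk;
    features[v] is the set of attributes of node v."""
    triples = []
    for i, source in enumerate(walk):
        for r in range(1, window + 1):
            if i + r < len(walk):
                for attr in features[walk[i + r]]:
                    triples.append((source, attr, r))
    return triples
```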
We investigated attributed node embedding and proposed efficient pooled (AE) and multi-scale (MUSAE) attributed node embedding algorithms with linear runtime.
We proved that these algorithms implicitly factorize probability matrices of features appearing in the neighbourhood of nodes.
Two widely used neighbourhood preserving node embedding methods Perozzi et al. (2014; are in fact simplified cases of our models.
On several datasets (Wikipedia, Facebook, Github, and citation networks) we found that representations learned by our methods, in particular MUSAE, outperform neighbourhood-based node embedding methods (Perozzi et al. (2014) ; Grover & Leskovec (2016) ). Our proposed embedding models are differentiated from other methods in that they encode feature information from higher-order neighborhoods.
The most similar previous model, BANE (Yang et al., 2018) , encodes node attributes from higher-order neighbourhoods, but it has non-linear runtime complexity, and the product of the adjacency matrix power and the feature matrix is decomposed explicitly.
Appendix A: Proofs. Lemma 1.
The empirical statistics of node-feature pairs obtained from random walks give unbiased estimates of joint probabilities of observing feature f ∈ F r steps
(i) after; or
(ii) before node v ∈ V, as given by: | We develop efficient multi-scale approximate attributed network embedding procedures with provable properties. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:649 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Music relies heavily on repetition to build structure and meaning.
Self-reference occurs on multiple timescales, from motifs to phrases to reusing of entire sections of music, such as in pieces with ABA structure.
The Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence.
This suggests that self-attention might also be well-suited to modeling music.
In musical composition and performance, however, relative timing is critically important.
Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance (Shaw et al., 2018).
This is impractical for long sequences such as musical compositions since their memory complexity is quadratic in the sequence length.
We propose an algorithm that reduces the intermediate memory requirements to linear in the sequence length.
This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long (thousands of steps) compositions with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies.
We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-competition, and obtain state-of-the-art results on the latter.
A musical piece often consists of recurring elements at various levels, from motifs to phrases to sections such as verse-chorus.
To generate a coherent piece, a model needs to reference elements that came before, sometimes in the distant past, and then repeat, vary, and further develop them to create contrast and surprise.
Intuitively, self-attention (Parikh et al., 2016) could be a good match for this task.
Self-attention over its own previous outputs allows an autoregressive model to access any part of the previously generated output at every step of generation.
By contrast, recurrent neural networks have to learn to proactively store elements to be referenced in a fixed size state or memory, making training potentially much more difficult.
We believe that repeating self-attention in multiple, successive layers of a Transformer decoder BID17 can help capture the multiple levels at which self-referential phenomena exist in music. In its original formulation, the Transformer relies on absolute position representations, using either positional sinusoids or learned position embeddings that are added to the per-position input representations.
Recurrent and convolutional neural networks instead model position in relative terms: RNNs through their recurrence over the positions in their input, and CNNs by applying kernels that effectively choose which parameters to apply based on the relative position of the covered input representations. Music has multiple dimensions along which relative differences arguably matter more than their absolute values; the two most prominent are timing and pitch.
To capture such pairwise relations between representations, BID13 introduce a relation-aware version of self-attention which they use successfully to modulate self-attention by the distance between two positions.
We extend this approach to capture relative timing and optionally also pitch, which yields improvement in both sample quality and perplexity for the JSB Chorales dataset.
As opposed to the original Transformer, samples from a Transformer with our relative attention mechanism maintain the regular timing grid present in this dataset.
The model furthermore captures global timing, giving rise to regular phrases. The original formulation of relative attention BID13 requires O(L^2 D) memory, where L is the sequence length and D is the dimension of the model's hidden state.
This is prohibitive for long sequences such as those found in the Maestro dataset of human-performed virtuosic, classical piano music BID7 .
In Section 3.4, we show how to reduce the memory requirements to O(LD), making it practical to apply relative attention to long sequences. The Maestro dataset consists of MIDI recorded from performances of competition participants, bearing expressive dynamics and timing at a granularity of less than 10 milliseconds.
Discretizing time in a fixed grid on such a resolution would yield unnecessarily long sequences as not all events change on the same timescale.
We hence adopt a sparse, MIDI-like, event-based representation from (Oore et al., 2018) , allowing a minute of music with a 10-millisecond resolution to be represented at lengths around 2K.
This is in contrast to a 6K to 18K length that would be needed on a serialized multi-attribute fixed-grid representation.
As position in sequence no longer corresponds to time, a priori it is not obvious that relative attention should work as well with such a representation.
However, we will show in Section 4.2 that it does improve perplexity and sample quality over strong baselines. We speculate that idiomatic piano gestures such as scales, arpeggios and other motifs all exhibit a certain grammar and recur periodically, hence knowing their relative positional distances makes it easier to model this regularity.
This inductive bias towards learning relational information, as opposed to patterns based on absolute position, suggests that the Transformer with relative attention could generalize beyond the lengths it was trained on, which our experiments in Section 4.2.1 confirm.
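As a rough illustration of how the quadratic intermediate memory can be avoided, the sketch below computes relative-position logits with a padding-and-reshape ("skewing") step instead of materializing an (L, L, D) tensor. The exact procedure in Section 3.4 of the paper may differ in details; the tensor layout and PyTorch usage here are assumptions.
```python
import torch
import torch.nn.functional as F

def relative_logits(q, rel_emb):
    """q: (L, d) queries; rel_emb: (L, d) embeddings for relative distances -(L-1)..0.
    Returns an (L, L) matrix whose entry (i, j) scores relative distance j - i,
    using only O(L*L) intermediate memory instead of O(L*L*d)."""
    L = q.shape[0]
    rel = q @ rel_emb.t()          # (L, L): each query against every distance embedding
    rel = F.pad(rel, (1, 0))       # prepend one zero column -> (L, L+1)
    rel = rel.reshape(L + 1, L)    # "skew" so that equal distances line up on diagonals
    return rel[1:]                 # drop the first row -> (L, L), aligned with j - i
```
In causal attention, these relative logits would simply be added to the usual query-key scores before masking and softmax.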
In this work we demonstrated that the Transformer equipped with relative attention is very well-suited for generative modeling of symbolic music.
The compelling long-term structure in the samples from our model leaves us enthusiastic about this direction of research.
Moreover, the ability to expand upon a prime, in particular, suggests potential applications as a creative tool. The significant improvement from relative attention highlights a shortcoming of the original Transformer that might also limit its performance in other domains.
Improving the Transformer's ability to capture periodicity at various time scales, for instance, or relations between scalar features akin to pitch could improve time-series models.
Our memory-efficient implementation enables the application of relative attention to much longer sequences such as long texts or even audio waveforms, which significantly broadens the range of problems to which it could be applied. | We show the first successful use of Transformer in generating music that exhibits long-term structure. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:65 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples.
While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult.
In this paper, we present
1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the gap across methods including the baseline,
2) a slightly modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and
3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms.
Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones.
In a realistic, cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
Deep learning models have achieved state-of-the-art performance on visual recognition tasks such as image classification.
The strong performance, however, heavily relies on training a network with abundant labeled instances with diverse visual variations (e.g., thousands of examples for each new class even with pre-training on large-scale dataset with base classes).
The human annotation cost as well as the scarcity of data in some classes (e.g., rare species) significantly limit the applicability of current vision systems to learn new visual concepts efficiently.
In contrast, the human visual systems can recognize new classes with extremely few labeled examples.
It is thus of great interest to learn to generalize to new classes with a limited amount of labeled examples for each novel class. The problem of learning to generalize to unseen classes during training, known as few-shot classification, has attracted considerable attention BID29 ; BID27 ; BID6 ; BID25 ; BID28 ; BID9 ; BID24 .
One promising direction to few-shot classification is the meta-learning paradigm where transferable knowledge is extracted and propagated from a collection of tasks to prevent overfitting and improve generalization.
Examples include model initialization based methods BID25 ; BID6 , metric learning methods BID29 ; BID27 ; BID28 , and hallucination based methods BID0 ; BID11 ; BID31 .
Another line of work BID10 ; BID24 also demonstrates promising results by directly predicting the weights of the classifiers for novel classes. Limitations.
While many few-shot classification algorithms have reported improved performance over the state-of-the-art, there are two main challenges that prevent us from making a fair comparison and measuring the actual progress.
First, the discrepancy of the implementation details among multiple few-shot learning algorithms obscures the relative performance gain.
The performance of baseline approaches can also be significantly under-estimated (e.g., training without data augmentation).
Second, while the current evaluation focuses on recognizing novel class with limited training examples, these novel classes are sampled from the same dataset.
The lack of domain shift between the base and novel classes makes the evaluation scenarios unrealistic. Our work.
In this paper, we present a detailed empirical study to shed new light on the few-shot classification problem.
First, we conduct consistent comparative experiments to compare several representative few-shot classification methods on common ground.
Our results show that using a deep backbone shrinks the performance gap between different methods in the setting of limited domain differences between base and novel classes.
Second, by replacing the linear classifier with a distance-based classifier as used in BID10 ; BID24 , the baseline method is surprisingly competitive to current state-of-art meta-learning algorithms.
Third, we introduce a practical evaluation setting where there exists domain shift between base and novel classes (e.g., sampling base classes from generic object categories and novel classes from fine-grained categories).
Our results show that sophisticated few-shot learning algorithms do not provide performance improvement over the baseline under this setting.
Through making the source code and model implementations with a consistent evaluation setting publicly available, we hope to foster future progress in the field.
Our contributions are as follows. 1.
We provide a unified testbed for several different few-shot classification algorithms for a fair comparison.
Our empirical evaluation results reveal that the use of a shallow backbone commonly used in existing work leads to favorable results for methods that explicitly reduce intra-class variation.
Increasing the model capacity of the feature backbone reduces the performance gap between different methods when domain differences are limited. 2.
We show that a baseline method with a distance-based classifier surprisingly achieves competitive performance with the state-of-the-art meta-learning methods on both mini-ImageNet and CUB datasets. 3.
We investigate a practical evaluation setting where base and novel classes are sampled from different domains.
We show that current few-shot classification algorithms fail to address such domain shifts and are inferior even to the baseline method, highlighting the importance of learning to adapt to domain differences in few-shot learning.
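As a minimal sketch of the kind of distance-based classifier referred to in the second contribution above, the module below scores features by scaled cosine similarity to per-class weight vectors instead of using an ordinary linear layer. The scale constant and initialization are assumptions rather than values from the paper.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Distance-based classification head: logits are scaled cosine similarities
    between an input feature vector and learnable per-class weight vectors."""
    def __init__(self, feat_dim, num_classes, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, features):                 # features: (batch, feat_dim)
        f = F.normalize(features, dim=-1)
        w = F.normalize(self.weight, dim=-1)
        return self.scale * f @ w.t()            # (batch, num_classes) logits
```
Normalizing both the features and the class weights removes the influence of feature magnitude, which is one way such a head can reduce intra-class variation.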
In this paper, we have investigated the limits of the standard evaluation setting for few-shot classification.
Through comparing methods on a common ground, our results show that the Baseline++ model is competitive with the state of the art under standard conditions, and that the Baseline model achieves competitive performance with recent state-of-the-art meta-learning algorithms on both the CUB and mini-ImageNet benchmark datasets when using a deeper feature backbone.
Surprisingly, the Baseline compares favorably against all the evaluated meta-learning algorithms under a realistic scenario where there exists domain shift between the base and novel classes.
By making our source code publicly available, we believe that community can benefit from the consistent comparative experiments and move forward to tackle the challenge of potential domain shifts in the context of few-shot learning. | A detailed empirical study in few-shot classification that revealing challenges in standard evaluation setting and showing a new direction. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:650 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Temporal logics are useful for describing dynamic system behavior, and have been successfully used as a language for goal definitions during task planning.
Prior works on inferring temporal logic specifications have focused on "summarizing" the input dataset -- i.e., finding specifications that are satisfied by all plan traces belonging to the given set.
In this paper, we examine the problem of inferring specifications that describe temporal differences between two sets of plan traces.
We formalize the concept of providing such contrastive explanations, then present a Bayesian probabilistic model for inferring contrastive explanations as linear temporal logic specifications.
We demonstrate the efficacy, scalability, and robustness of our model for inferring correct specifications across various benchmark planning domains and for a simulated air combat mission.
In a meeting where multiple plan options are under deliberation by a team, it would be helpful for that team's resolution process if someone could intuitively explain how the plans under consideration differ from one another.
Also, given a need to identify differences in execution behavior between distinct groups of users (e.g., a group of users who successfully completed a task using a particular system versus those who did not), explanations that identify distinguishing patterns between group behaviors can yield valuable analytics and insights toward iterative system refinement. In this paper, we seek to generate explanations for how two sets of divergent plans differ.
We focus on generating such contrastive explanations by discovering specifications satisfied by one set of plans, but not the other.
Prior works on plan explanations include those related to plan recognition for inferring latent goals through observations BID25 BID35 , works on system diagnosis and excuse generation in order to explain plan failures BID29 BID10 , and those focused on synthesizing "explicable" plans -i.e., plans that are self-explanatory with respect to a human's mental model BID16 .
The aforementioned works, however, only involve the explanation or generation of a single plan; we instead focus on explaining differences between multiple plans, which can be helpful in various applications, such as the analysis of competing systems and compliance models, and detecting anomalous behaviour of users. A specification language should be used in order to achieve clear and effective plan explanations.
Prior works have considered surface-level metrics such as plan cost and action (or causal link) similarity measures to describe plan differences BID23 BID3 .
In this work, we leverage linear temporal logic (LTL) BID24 which is an expressive language for capturing temporal relations of state variables.
We use a plan's individual satisfaction (or dissatisfaction) of LTL specifications to describe their differences.LTL specifications have been widely used in both industrial systems and planning algorithms to compactly describe temporal properties BID32 .
They are human interpretable when expressed as compositions of predefined templates; inversely, they can be constructed from natural language descriptions BID7 and serve as natural patterns when encoding high-level human strategies for planning constraints BID14 . Although
a suite of LTL miners have been developed for software engineering and verification purposes BID32 BID17 BID28 , they primarily focus on mining properties that summarize the overall behavior on a single set of plan traces. Recently
, BID22 presented SAT-based algorithms to construct a LTL specification that asserts contrast between two sets of traces. The algorithms
, however, are designed to output only a single explanation, and are susceptible to failure when the input contains imperfect traces. Similar to Neider
and Gavran, our problem focuses on mining contrastive explanations between two sets of traces, but we adopt a probabilistic approach -we present a Bayesian inference model that can generate multiple explanations while demonstrating robustness to noisy input. The model also permits
scalability when searching in large hypothesis spaces and allows for flexibility in incorporating various forms of prior knowledge and system designer preferences. We demonstrate the efficacy
of our model for extracting correct explanations on plan traces across various benchmark planning domains and for a simulated air combat mission. Plan explanations are becoming increasingly important as automated planners and humans collaborate. This first involves humans
making sense of the planner's output (e.g., PDDL plans), where prior work has focused on developing user-friendly interfaces that provide graphical visualizations to describe the causal links and temporal relations of plan steps BID1 BID26 BID21 . The outputs of these systems
, however, require an expert for interpretation and do not provide a direct explanation as to why the planner made certain decisions to realize the outputted plan. Automatic generation of explanations has been studied in goal recognition settings, where the objective is to infer the latent goal state that best explains the incomplete sequence of observations BID25 BID30 .
emphasize the generation of plans that are deemed selfexplanatory, defined in terms of optimizing plan costs for a human's mental model of the world BID16 . Mixed-initiative planners iteratively
revise their plan generation based on user input (e.g. action modifications), indirectly promoting an understanding of differences across newly generated plans through continual user interaction BID27 BID3 . All aforementioned works deal with explainability
with respect to a single planning problem specification, whereas our model deals with explaining differences in specifications governing two distinct sets of plans given as input. Works on model reconciliation focus on producing explanations for planning models (i.e. predicates, preconditions and effects), instead of the realized plans. Explanations are specified in the form of model updates
, iteratively bringing an incomplete model to a more complete world model. The term, "contrastive explanation," is used in these
works to identify the relevant differences between the input pair of models. Our work is similar in spirit but focuses on producing
a specification of differences in the constraints satisfied among realized plans. Our approach takes sets of observed plans as input rather
than planning models. While model updates are an important modality for providing plan explanations, there are certain limitations. We note that an optimal plan generated with respect to a
complete environment/world model is not always explicable or self-explanatory. The space of optimal plans may be large, and the underlying
preference or constraint that drives the generation of a particular plan may be difficult to pre-specify and incorporate within the planning model representation. We focus on explanations stemming directly from the realized
plans themselves. Environment/world models (e.g. PDDL domain files) can be helpful
in providing additional context, but are not necessary for our approach. Our work leverages LTL as an explanation language. Temporal patterns can offer greater expressivity
power in describing why a set of plans occurred and how they differ, and may reveal hidden plan dynamics that cannot be captured by the use of surface-level metrics like plan cost or action similarities. Our work on using LTL for contrastive explanations directly contributes
to exploring how we can answer the following roadmap questions for XAIP BID9 : "why did you do that? why didn't you do something else (that I would have done)?" Prior research into mining LTL specifications has focused on generating
a "summary" explanation of the observed traces. BID12 explored mining globally persistent specifications from demonstrated
action traces for a finite state Markov decision process. BID17 introduced Texada, a system for mining all possible instances of a given
LTL template from an output log where each unique string is represented as a new proposition. BID28 proposed a template-based probabilistic model to infer task specifications
given a set of demonstrations. However, all of these approaches focus on inferring a specification that all the
demonstrated traces satisfy. For contrastive explanations, Neider and Gavran (2018) presented SAT-based algorithms to infer an LTL specification that delineates between the positive and negative sets of traces. Unlike existing LTL miners, the algorithms construct an arbitrary, minimal LTL specification
without requiring predefined templates. However, they are designed to output only a single specification, and can fail when the sets
contain imperfect traces (i.e., if there exists no specification consistent with every single input trace.). We present a probabilistic model for the same problem and generate multiple contrastive explanations
while offering robustness to noisy input. Some works have proposed algorithms to infer contrastive explanations for continuous-valued time-series data based on a restricted signal temporal logic (STL) grammar BID33 BID15 . However, the continuous space semantics of STL and a restricted subset of temporal operators make the
grammar unsuitable for use with planning domain problems. To the best of our knowledge, our proposed model is the first probabilistic model to infer contrastive
LTL specifications for sets of traces in domains defined by PDDL.
The runtime for our model and the delimited enumeration baseline with 2,000 samples ranged between 1.2-4.7 seconds (increase in |V | only had marginal effect on the runtime).
The SAT-based miner by Neider and Gavran often failed to generate a solution within a five minute cutoff (see the number of its timeout cases in the last column of TAB2 ).
The prior work can only output a single ϕ*, which frequently took the form of F p_i .
It did not scale well to problems that required more complex ϕ as solutions.
This is because increasing the "depth" of ϕ (the number of temporal / Boolean operators and propositions) exponentially increased the size of the compiled SAT problem.
In our experiments, the prior work timed out for problems requiring solutions with depth ≥ 3 (note that F p_i has a depth of 2).
Robustness to Noisy Input In order to test robustness, we perturbed the input X by randomly swapping traces between π A and π B .
For example, a noise rate of 0.2 would swap 20% of the traces, where the accuracy of ϕ_ground on the perturbed data, X̃ = (π̃_A, π̃_B), would evaluate to 0.8 (note that it may be possible to discover other ϕ that achieve better accuracy on X̃).
The MAP estimates inferred from X̃, {ϕ*}, were evaluated on the original input X to assess any loss of ability to provide contrast.
Figure 3 shows the average accuracy of {ϕ*}, evaluated on both X and X̃, across varying noise rates.
Even at a moderate noise rate of 0.25, the inferred ϕ * s were able to maintain an average accuracy greater than 0.9 on X. Such a threshold is promising for real-world applications.
The robustness did start to sharply decline as noise rate increased past 0.4.
For all test cases, the Neider and Gavran miner failed to generate a solution for anything with a noise rate ≥ 0.1.
We have presented a probabilistic Bayesian model to infer contrastive LTL specifications describing how two sets of plan traces differ.
Our model generates multiple contrastive explanations more efficiently than the state-of-the-art and demonstrates robustness to noisy input.
It also provides a principled approach to incorporate various forms of prior knowledge or preferences during search.
It can serve as a strong foundation that can be naturally extended to multiple input sets by repeating the algorithm for all pairwise or one-vs.-rest
comparisons. Interesting avenues for future work include gauging the saliency of propositions, as well as deriving a minimal set of contrastive explanations. Furthermore
, we seek to test the model in human-in-the-loop settings, with the goal of understanding the relationship between different planning heuristics for the saliency of propositions (e.g. landmarks and causal links) to their actual explicability when the explanation is communicated to a human. | We present a Bayesian inference model to infer contrastive explanations (as LTL specifications) describing how two sets of plan traces differ. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:651 |
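The record above concerns mining LTL specifications that contrast two sets of plan traces. The sketch below is only a minimal illustration of two common LTL templates evaluated on finite traces and of the contrast score described informally in the record (satisfied by set A, violated by set B); it is not the paper's Bayesian model, and all names are assumptions.

```python
def holds_eventually(trace, prop):
    # F prop: the proposition holds in at least one state of the finite plan trace
    return any(prop in state for state in trace)

def holds_globally(trace, prop):
    # G prop: the proposition holds in every state of the trace
    return all(prop in state for state in trace)

def contrast_accuracy(phi, traces_a, traces_b):
    # Score a candidate specification phi as a contrastive explanation:
    # it should be satisfied by traces in set A and violated by traces in set B.
    sat_a = sum(phi(t) for t in traces_a) / len(traces_a)
    unsat_b = sum(not phi(t) for t in traces_b) / len(traces_b)
    return 0.5 * (sat_a + unsat_b)

plans_a = [[{"p"}, {"p", "q"}], [{"p"}, {"p"}]]
plans_b = [[{"q"}, set()], [set(), {"q"}]]
print(contrast_accuracy(lambda t: holds_globally(t, "p"), plans_a, plans_b))  # 1.0
```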
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piece-wise linear non-linearity activations.
We use tropical geometry, a new development in the area of algebraic geometry, to provide a characterization of the decision boundaries of a simple neural network of the form (Affine, ReLU, Affine).
Specifically, we show that the decision boundaries are a subset of a tropical hypersurface, which is intimately related to a polytope formed by the convex hull of two zonotopes.
The generators of the zonotopes are precise functions of the neural network parameters.
We utilize this geometric characterization to shed light and new perspective on three tasks.
In doing so, we propose a new tropical perspective for the lottery ticket hypothesis, where we see the effect of different initializations on the tropical geometric representation of the decision boundaries.
Also, we leverage this characterization as a new set of tropical regularizers, which deal directly with the decision boundaries of a network.
We investigate the use of these regularizers in neural network pruning (removing network parameters that do not contribute to the tropical geometric representation of the decision boundaries) and in generating adversarial input attacks (with input perturbations explicitly perturbing the decision boundaries geometry to change the network prediction of the input).
In this paper, we leverage tropical geometry to characterize the decision boundaries of neural networks in the form (Affine, ReLU, Affine) and relate it to well-studied geometric objects such as zonotopes and polytopes.
We leverage this representation to provide a tropical perspective on the lottery ticket hypothesis, network pruning, and the design of adversarial attacks.
One natural extension of this work is a compact derivation for the characterization of the decision boundaries of convolutional neural networks (CNNs) and graph convolutional networks (GCNs).
A PRELIMINARIES AND DEFINITIONS.
Fact 1. P+Q = {p + q, ∀p ∈ P and q ∈ Q} is the Minkowski sum between two sets P and Q.
Fact 2. Let f be a tropical polynomial and let a ∈ N. Then
Let both f and g be tropical polynomials. Then
Note that V(P(f )) is the set of vertices of the polytope P(f ). | Tropical geometry can be leveraged to represent the decision boundaries of neural networks and bring to light interesting insights. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:652 |
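The record above relates decision boundaries to zonotopes built from Minkowski sums (Fact 1). The snippet below is only a small numerical illustration of those two geometric objects, not the paper's characterization; the generator values are arbitrary.

```python
import itertools
import numpy as np

def minkowski_sum(P, Q):
    # Fact 1 above: P + Q = {p + q : p in P, q in Q} for two finite point sets
    return np.array([p + q for p, q in itertools.product(P, Q)])

def zonotope_points(generators):
    # A zonotope is the Minkowski sum of the line segments {0, g_i} spanned by
    # its generators; enumerating the 0/1 endpoint choices gives candidate vertices.
    G = np.asarray(generators, dtype=float)
    pts = np.zeros((1, G.shape[1]))
    for g in G:
        pts = minkowski_sum(pts, np.stack([np.zeros_like(g), g]))
    return pts

gens = [[1.0, 0.0], [0.5, 1.0], [-0.5, 1.0]]
print(zonotope_points(gens).shape)  # (8, 2): one point per 0/1 choice of generators
```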
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
First-order methods such as stochastic gradient descent (SGD) are currently the standard algorithm for training deep neural networks.
Second-order methods, despite their better convergence rate, are rarely used in practice due to the prohibitive computational cost of calculating the second-order information.
In this paper, we propose a novel Gram-Gauss-Newton (GGN) algorithm to train deep neural networks for regression problems with square loss.
Our method draws inspiration from the connection between neural network optimization and kernel regression of neural tangent kernel (NTK).
Different from typical second-order methods that have heavy computational cost in each iteration, GGN only has minor overhead compared to first-order methods such as SGD.
We also give theoretical results to show that for sufficiently wide neural networks, the convergence rate of GGN is quadratic.
Furthermore, we provide a convergence guarantee for the mini-batch GGN algorithm, which is, to our knowledge, the first convergence result for the mini-batch version of a second-order method on overparameterized neural networks.
Preliminary experiments on regression tasks demonstrate that for training standard networks, our GGN algorithm converges much faster and achieves better performance than SGD.
First-order methods such as Stochastic Gradient Descent (SGD) are currently the standard choice for training deep neural networks.
The merit of first-order methods is obvious: they only calculate the gradient and therefore are computationally efficient.
In addition to better computational efficiency, SGD has even more advantages among the first-order methods.
At each iteration, SGD computes the gradient only on a mini-batch instead of all training data.
Such randomness introduced by sampling the mini-batch can lead to better generalization (Hardt et al., 2015; Keskar et al., 2016; Masters & Luschi, 2018; Mou et al., 2017; Zhu et al., 2018) and better convergence (Ge et al., 2015; Jin et al., 2017a; b) , which is crucial when the function class is highly overparameterized deep neural networks.
Recently there is a huge body of works trying to develop more efficient first-order methods beyond SGD (Duchi et al., 2011; Kingma & Ba, 2014; Luo et al., 2019; Liu et al., 2019) .
Second-order methods, despite their better convergence rate, are rarely used to train deep neural networks.
At each iteration, the algorithm has to compute second order information, for example, the Hessian or its approximation, which is typically an m by m matrix where m is the number of parameters of the neural network.
Moreover, the algorithm needs to compute the inverse of this matrix.
The computational cost is prohibitive and usually it is not even possible to store such a matrix.
These methods involve complicated update formulas and require subtle implementation tricks to use backpropagation.
In contrast, GGN has a simpler update rule and a better guarantee for neural networks.
In a concurrent and independent work, Zhang et al. (2019a) showed that the natural gradient method and K-FAC have a linear convergence rate for sufficiently wide networks in the full-batch setting.
In contrast, our method enjoys a higher-order (quadratic) convergence rate guarantee for overparameterized networks, and we focus on developing a practical and theoretically sound optimization method.
We also reveal the relation between our method and NTK kernel regression, so using results based on NTK (Arora et al., 2019b) , one can easily give generalization guarantee of our method.
Another independent work (Achiam et al., 2019) proposed a preconditioned Q-learning algorithm which has similar form of our update rule.
Unlike the methods considered in Zhang et al. (2019a) ; Achiam et al. (2019) , which contain a learning rate that needs to be tuned, our derivation of GGN does not introduce a learning rate term (equivalently, the learning rate can be fixed to 1 and still achieve good performance, which is verified in Figure 2 (c)).
We propose a novel Gram-Gauss-Newton (GGN) method for solving regression problems with square loss using overparameterized neural networks.
Despite being a second-order method, the computation overhead of the GGN algorithm at each iteration is small compared to SGD.
We also prove that if the neural network is sufficiently wide, GGN algorithm enjoys a quadratic convergence rate.
Experimental results on two regression tasks demonstrate that GGN compares favorably to SGD on these data sets with standard network architectures.
Our work illustrates that second-order methods have the potential to compete with first-order methods for learning deep neural networks with huge number of parameters.
In this paper, we mainly focus on the regression task, but our method can be easily generalized to other tasks such as classification as well.
Consider the k-category classification problem, the neural network outputs a vector with k entries.
Although this will increase the computational complexity of getting the Jacobian, whose size increases k times, i.e., J ∈ R^((bk)×m), each row of J can still be computed in parallel, which means the extra cost only comes from parallel computation overhead when we calculate in a fully parallel setting.
While most first-order methods for training neural networks can hardly make use of the computational resource in parallel or distributed settings to accelerate training, our GGN method can exploit this ability.
For first-order methods, basically extra computational resource can only be used to calculate more gradients at a time by increasing batch size, which harms generalization a lot.
But for GGN, more resource can be used to refine the gradients and achieve accelerated convergence speed with the help of second-order information.
It is an important future work to study the application of GGN to classification problems. | A novel Gram-Gauss-Newton method to train neural networks, inspired by neural tangent kernel and Gauss-Newton method, with fast convergence speed both theoretically and experimentally. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:653 |
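The record above describes the Gram-Gauss-Newton update for square-loss regression, whose key point is that the Gram matrix J Jᵀ is only batch-size by batch-size. The sketch below is a schematic version of one such step; the damping term lam and the function names are assumptions added for illustration, not the authors' exact algorithm.

```python
import numpy as np

def ggn_step(theta, jacobian_fn, outputs, targets, lam=1e-3):
    """One Gram-Gauss-Newton style update for square-loss regression.

    jacobian_fn(theta) returns J with shape (b, m): the Jacobian of the network
    outputs w.r.t. the m parameters on a mini-batch of size b. The Gram matrix
    J @ J.T is only (b, b), so the linear solve is cheap compared to inverting
    an (m, m) curvature matrix.
    """
    J = jacobian_fn(theta)                      # (b, m)
    residual = outputs - targets                # (b,)
    gram = J @ J.T + lam * np.eye(J.shape[0])   # (b, b); lam is an assumed damping term
    update = J.T @ np.linalg.solve(gram, residual)
    return theta - update
```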
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent pretrained sentence encoders achieve state of the art results on language understanding tasks, but does this mean they have implicit knowledge of syntactic structures?
We introduce a grammatically annotated development set for the Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2018), which we use to investigate the grammatical knowledge of three pretrained encoders, including the popular OpenAI Transformer (Radford et al., 2018) and BERT (Devlin et al., 2018).
We fine-tune these encoders to do acceptability classification over CoLA and compare the models’ performance on the annotated analysis set.
Some phenomena, e.g. modification by adjuncts, are easy to learn for all models, while others, e.g. long-distance movement, are learned effectively only by models with strong overall performance, and others still, e.g. morphological agreement, are hardly learned by any model.
The effectiveness and ubiquity of pretrained sentence embeddings for natural language understanding has grown dramatically in recent years.
Recent sentence encoders like OpenAI's Generative Pretrained Transformer (GPT; Radford et al., 2018) and BERT (Devlin et al., 2018) achieve the state of the art on the GLUE benchmark (Wang et al., 2018) .
Among the GLUE tasks, these stateof-the-art systems make their greatest gains on the acceptability task with the Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2018) .
CoLA contains example sentences from linguistics publications labeled by experts for grammatical acceptability, and written to show subtle grammatical features.
Because minimal syntactic differences can separate acceptable sentences from unacceptable ones (What did Bo write a book about? / *What was a book about written by Bo?), and acceptability classifiers are more reliable when trained on GPT and BERT than on recurrent models, it stands to reason that GPT and BERT have better implicit knowledge of syntactic features relevant to acceptability. Our goal in this paper is to develop an evaluation dataset that can locate which syntactic features a model successfully learns by identifying the syntactic domains of CoLA in which it performs the best.
Using this evaluation set, we compare the syntactic knowledge of GPT and BERT in detail, and investigate the strengths of these models over the baseline BiLSTM model published by Warstadt et al. (2018) .
The analysis set includes expert annotations labeling the entire CoLA development set for the presence of 63 fine-grained syntactic features. We identify many specific syntactic features that make sentences harder to classify, and many that have little effect.
For instance, sentences involving unusual or marked argument structures are no harder than the average sentence, while sentences with long distance dependencies are hard to learn.
We also find features of sentences that accentuate or minimize the differences between models.
Specifically, the transformer models seem to learn long-distance dependencies much better than the recurrent model, yet have no advantage on sentences with morphological violations.
Using a new grammatically annotated analysis set, we identify several syntactic phenomena that are predictive of good or bad performance of current state of the art sentence encoders on CoLA.
We also use these results to develop hypotheses about why BERT is successful, and why transformer models outperform sequence models. Our findings can guide future work on sentence embeddings.
A current weakness of all sentence encoders we investigate, including BERT, is the identification of morphological violations.
Future engineering work should investigate whether switching to a character-level model can mitigate this problem.
Additionally, transformer models appear to have an advantage over sequence models with long-distance dependencies, but still struggle with these constructions relative to more local phenomena.
It stands to reason that this performance gap might be widened by training larger or deeper transformer models, or training on longer or more complex sentences.
This analysis set can be used by engineers interested in evaluating the syntactic knowledge of their encoders.
Finally, these findings suggest possible controlled experiments that could confirm whether there is a causal relation between the presence of the syntactic features we single out as interesting and model performance.
Our results are purely correlational, and do not mark whether a particular construction is crucial for the acceptability of the sentence.
Future experiments following Ettinger et al. (2018) and Kann et al. (2019) can semi-automatically generate datasets manipulating, for example, length of long-distance dependencies, inflectional violations, or the presence of interrogatives, while controlling for factors like sentence length and word choice, in order determine the extent to which these features impact the quality of sentence embeddings.
(1) Included
a. John owns the book. (37)
b. Park Square has a festive air. (131)
c. *Herself likes Mary's mother.
(2) Excluded
a. Bill has eaten cake.
b. I gave Joe a book. | We investigate the implicit syntactic knowledge of sentence embeddings using a new analysis set of grammatically annotated sentences with acceptability judgments. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:654 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
When considering simultaneously a finite number of tasks, multi-output learning enables one to account for the similarities of the tasks via appropriate regularizers.
We propose a generalization of the classical setting to a continuum of tasks by using vector-valued RKHSs.
Several fundamental problems in machine learning and statistics can be phrased as the minimization of a loss function described by a hyperparameter.
The hyperparameter might capture numerous aspects of the problem:
(i) the tolerance w.r.t. outliers as the ε-insensitivity in Support Vector Regression (Vapnik et al., 1997) ,
(ii) importance of smoothness or sparsity such as the weight of the l2-norm in Tikhonov regularization (Tikhonov & Arsenin, 1977) , the l1-norm in LASSO (Tibshirani, 1996) , or more general structured-sparsity inducing norms BID3 ,
(iii) Density Level-Set Estimation (DLSE), see for example the One-Class Support Vector Machine (OCSVM, Schölkopf et al., 2000) ,
(iv) confidence as exemplified by Quantile Regression (QR, Koenker & Bassett Jr, 1978) , or
(v) importance of different decisions as implemented by Cost-Sensitive Classification (CSC, Zadrozny & Elkan, 2001) .
In various cases including QR, CSC or DLSE, one is interested in solving the parameterized task for several hyperparameter values.
Multi-Task Learning (Evgeniou & Pontil, 2004) provides a principled way of benefiting from the relationship between similar tasks while preserving local properties of the algorithms: the ν-property in DLSE (Glazer et al., 2013) or the quantile property in QR (Takeuchi et al., 2006).
A natural extension of the traditional multi-task setting is to provide a prediction tool able to deal with any value of the hyperparameter.
In their seminal work, Takeuchi et al. (2013) extended multi-task learning by considering an infinite number of parametrized tasks in a framework called Parametric Task Learning (PTL).
Assuming that the loss is piecewise affine in the hyperparameter, the authors are able to get the whole solution path through parametric programming, relying on techniques developed by Hastie et al. (2004).
In this paper, we relax the affine model assumption on the tasks as well as the piecewise-linear assumption on the loss, and take a different angle.
We propose Infinite Task Learning (ITL) within the framework of function-valued function learning to handle a continuum of parameterized tasks using Vector-Valued Reproducing Kernel Hilbert Spaces (vv-RKHS, Pedrick, 1957).
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:655 |
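The record above uses Quantile Regression as a running example of a task indexed by a continuous hyperparameter. The snippet below shows the standard pinball (quantile) loss for a single level tau, which is the kind of parameterized loss a continuum of tasks would share; it is illustrative only and not the paper's operator-valued-kernel machinery.

```python
def pinball_loss(y_true, y_pred, tau):
    # Quantile (pinball) loss for quantile level tau in (0, 1); in Infinite Task
    # Learning the hyperparameter tau indexes a continuum of quantile-regression tasks.
    r = y_true - y_pred
    return max(tau * r, (tau - 1.0) * r)

print(pinball_loss(2.0, 1.0, tau=0.9))  # 0.9: under-prediction is penalized more at high quantiles
```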
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We analyze the joint probability distribution on the lengths of the
vectors of hidden variables in different layers of a fully connected
deep network, when the weights and biases are chosen randomly according to
Gaussian distributions, and the input is binary-valued.
We show
that, if the activation function satisfies a minimal set of
assumptions, satisfied by all activation functions that we know that
are used in practice, then, as the width of the network gets large,
the ``length process'' converges in probability to a length map
that is determined as a simple function of the variances of the
random weights and biases, and the activation function.
We also show that this convergence may fail for activation functions
that violate our assumptions. | We prove that, for activation functions satisfying some conditions, as a deep network gets wide, the lengths of the vectors of hidden variables converge to a length map. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:656 |
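The record above studies the convergence of the "length process" to a length map determined by the weight and bias variances and the activation function. The recursion below is the standard mean-field form of such a length map (a squared-length version); the exact normalization used in the paper, especially for binary inputs, is an assumption here and the snippet is only a numerical illustration.

```python
import numpy as np

def length_map(q_prev, sigma_w2, sigma_b2, phi, n_mc=100000, seed=0):
    # q^{l+1} = sigma_w^2 * E_z[ phi(sqrt(q^l) * z)^2 ] + sigma_b^2,  z ~ N(0, 1)
    z = np.random.default_rng(seed).standard_normal(n_mc)
    return sigma_w2 * np.mean(phi(np.sqrt(q_prev) * z) ** 2) + sigma_b2

q = 1.0  # squared length of the (normalized) input
for layer in range(5):
    q = length_map(q, sigma_w2=2.0, sigma_b2=0.0, phi=lambda x: np.maximum(x, 0.0))
    print(layer, q)  # for ReLU with sigma_w^2 = 2 and zero bias, q stays (approximately) fixed
```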
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Data augmentation is one of the most effective approaches for improving the accuracy of modern machine learning models, and it is also indispensable to train a deep model for meta-learning.
However, most current data augmentation implementations applied in meta-learning are the same as those used in the conventional image classification.
In this paper, we introduce a new data augmentation method for meta-learning, which is named as ``Task Level Data Augmentation'' (referred to Task Aug).
The basic idea of Task Aug is to increase the number of image classes rather than the number of images in each class.
As a result, with a larger number of classes, we can sample more diverse task instances during training.
This allows us to train a deep network by meta-learning methods with little over-fitting.
Experimental results show that our approach achieves state-of-the-art performance on miniImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks.
Once the paper is accepted, we will provide a link to the code.
Although machine learning systems have achieved human-level ability in many fields given a large amount of data, learning from a few examples is still a challenge for modern machine learning techniques.
Recently, the machine learning community has paid significant attention to this problem, where few-shot learning is the common task for meta-learning (e.g., Ravi & Larochelle (2017) ; Finn et al. (2017) ; Snell et al. (2017) ).
The purpose of few-shot learning is to learn to maximize generalization accuracy across different tasks with few training examples.
In a classification application of few-shot learning, tasks are generated by sampling from a conventional classification dataset; then, samples are randomly selected from several classes in the classification dataset.
These examples are then split into training examples and testing examples.
Thus, a tiny learning task is formed by these examples.
The meta-learning methods are applied to control the learning process of a base learner, so that it correctly classifies the testing examples.
Data augmentation is widely used to improve the training of deep learning models.
Usually, data augmentation is regarded as an explicit form of regularization He et al. (2016) ; Simonyan & Zisserman (2014) .
Thus, data augmentation aims at artificially generating training data by applying various transformations to existing data, such as adding noise, cropping, flipping, rotation, translation, etc.
The general idea of data augmentation is to increase the amount of data by altering existing data slightly so that it differs from the original data while still being recognizable by humans.
The new data are assigned to the same classes as the original data.
However, the minimum units of meta-learning are tasks rather than data points.
Increasing the data within the original classes cannot increase the diversity of task instances.
Therefore, "Task Aug" generates data that can be clearly recognized as belonging to classes different from the original ones.
With novel classes, more diverse task instances can be generated.
This is important for meta-learning, since meta-learning models must predict unseen classes during the testing phase.
Therefore, a larger number of classes is helpful for models to generate task instances with different classes.
In this work, the natural images are augmented by being rotated 90, 180, 270 degrees (we show examples in Figure 1 ).
We compare two cases:
1) the new images are assigned to the classes of the original images, and
2) the new images are assigned to new classes.
The proposed method is evaluated in experiments with state-of-the-art meta-learning methods Snell et al. (2017) .
The analysis of the experimental results shows that Task Aug can reduce over-fitting and improve performance, while the conventional data augmentation (referred to as Data Aug) by rotation, which assigns the new data to the classes of the original data, does not improve performance and can even cause worse results.
In the comparative experiments, Task Aug achieves the best accuracy among the meta-learning methods applied.
Besides, the best results of our experiments exceed the current state-of-the-art result by a large margin. | We propose a data augmentation approach for meta-learning and prove that it is valid. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:657 |
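The record above defines Task Aug as creating new classes from rotated images rather than adding rotated copies to the original classes. The sketch below illustrates that relabeling scheme; the label mapping y * 4 + k and the function name are assumptions for illustration, not the authors' code.

```python
import numpy as np

def task_aug_rotations(images, labels):
    """Create new classes from 90/180/270-degree rotations (Task Aug idea).

    Unlike conventional rotation augmentation, each rotated copy receives a
    brand-new label, so the class pool grows 4x and more diverse few-shot
    tasks can be sampled from it.
    """
    aug_images, aug_labels = [], []
    for x, y in zip(images, labels):
        for k in range(4):  # k = 0 keeps the original image and class
            aug_images.append(np.rot90(x, k=k, axes=(0, 1)))
            aug_labels.append(y * 4 + k)  # assumed relabeling scheme
    return np.stack(aug_images), np.array(aug_labels)
```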
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we present a general framework for distilling expectations with respect to the Bayesian posterior distribution of a deep neural network, significantly extending prior work on a method known as ``Bayesian Dark Knowledge."
Our generalized framework applies to the case of classification models and takes as input the architecture of a ``teacher" network, a general posterior expectation of interest, and the architecture of a ``student" network.
The distillation method performs an online compression of the selected posterior expectation using iteratively generated Monte Carlo samples from the parameter posterior of the teacher model.
We further consider the problem of optimizing the student model architecture with respect to an accuracy-speed-storage trade-off.
We present experimental results investigating multiple data sets, distillation targets, teacher model architectures, and approaches to searching for student model architectures.
We establish the key result that distilling into a student model with an architecture that matches the teacher, as is done in Bayesian Dark Knowledge, can lead to sub-optimal performance.
Lastly, we show that student architecture search methods can identify student models with significantly improved performance.
Deep learning models have shown promising results in the areas including computer vision, natural language processing, speech recognition, and more (Krizhevsky et al., 2012; Graves et al., 2013a; b; Huang et al., 2016; Devlin et al., 2018) .
However, existing point estimation-based training methods for these models may result in predictive uncertainties that are not well calibrated, including the occurrence of confident errors.
It is well-known that Bayesian inference can often provide more robust posterior predictive distributions in the classification setting compared to the use of point estimation-based training.
However, the integrals required to perform Bayesian inference in neural network models are also well-known to be intractable.
Monte Carlo methods provide one solution to representing neural network parameter posteriors as ensembles of networks, but this can require large amounts of both storage and compute time (Neal, 1996; Welling & Teh, 2011) .
To help overcome these problems, Balan et al. (2015) introduced an interesting model training method referred to as Bayesian Dark Knowledge.
In the classification setting, Bayesian Dark Knowledge attempts to compress the Bayesian posterior predictive distribution induced by the full parameter posterior of a "teacher" network into a "student" network.
The parameter posterior of the teacher network is represented through a Monte Carlo ensemble of specific instances of the teacher network (the teacher ensemble), and the analytically intractable posterior predictive distributions are approximated as Monte Carlo averages over the output of the networks in the teacher ensemble.
The major advantage of this approach is that the computational complexity of prediction at test time is drastically reduced compared to computing Monte Carlo averages over a large ensemble of networks.
As a result, methods of this type have the potential to be much better suited to learning models for deployment in resource constrained settings.
In this paper, we present a Bayesian posterior distillation framework that generalizes the Bayesian Dark Knowledge approach in several significant directions.
The primary modeling and algorithmic contributions of this work are: (1) we generalize the target of distillation in the classification case from the posterior predictive distribution to general posterior expectations; (2) we generalize the student architecture from being restricted to match the teacher architecture to being a free choice in the distillation procedure.
The primary empirical contributions of this work are (1) evaluating the distillation of both the posterior predictive distribution and expected posterior entropy across a range of models and data sets including manipulations of data sets that increase posterior uncertainty; and (2) evaluating the impact of the student model architecture on distillation performance including the investigation of sparsity-inducing regularization and pruning for student model architecture optimization.
The key empirical findings are that (1) distilling into a student model that matches the architecture of the teacher, as in Balan et al. (2015) , can be sub-optimal; and (2) student architecture optimization methods can identify significantly improved student models.
We note that the significance of generalizing distillation to arbitrary posterior expectations is that it allows us to capture a wider range of useful statistics of the posterior that are of interest from an uncertainty quantification perspective.
As noted above, we focus on the case of distilling the expected posterior entropy in addition to the posterior predictive distribution itself.
When combined with the entropy of the posterior predictive distribution, the expected posterior entropy enables disentangling model uncertainty (epistemic uncertainty) from fundamental uncertainty due to class overlap (aleatoric uncertainty).
This distinction is extremely important in determining why predictions are uncertain for a given data case.
Indeed, the difference between these two terms is the basis for the Bayesian active learning by disagreement (BALD) score used in active learning, which samples instances with the goal of minimizing model uncertainty (Houlsby et al., 2011) .
The remainder of this paper is organized as follows.
In the next section, we begin by presenting background material and related work in Section 2.
In Section 3, we present the proposed framework and associated Generalized Posterior Expectation Distillation (GPED) algorithm.
In Section 4, we present experiments and results.
Additional details regarding data sets and experiments can be found in Appendix A, with supplemental results included in Appendix B.
We have presented a framework for distilling expectations with respect to the Bayesian posterior distribution of a deep neural network that generalizes the Bayesian Dark Knowledge approach in several significant directions.
Our results show that the performance of posterior distillation can be highly sensitive to the architecture of the student model, but that basic architecture search methods can help to identify student model architectures with improved speed-storage-accuracy trade-offs.
There are many directions for future work including considering the distillation of a broader class of posterior statistics including percentiles, assessing and developing more advanced student model architecture search methods, and applying the framework to larger state-of-the-art models.
A DATASETS AND MODEL DETAILS | A general framework for distilling Bayesian posterior expectations for deep neural networks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:658 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Variational Autoencoders (VAEs) have proven to be powerful latent variable models.
However, the form of the approximate posterior can limit the expressiveness of the model.
Categorical distributions are flexible and useful building blocks for example in neural memory layers.
We introduce the Hierarchical Discrete Variational Autoencoder (HD-VAE): a hierarchy of variational memory layers.
The Concrete/Gumbel-Softmax relaxation allows maximizing a surrogate of the Evidence Lower Bound by stochastic gradient ascent.
We show that, when using a limited number of latent variables, HD-VAE outperforms the Gaussian baseline on modelling multiple binary image datasets.
Training very deep HD-VAE remains a challenge due to the relaxation bias that is induced by the use of a surrogate objective.
We introduce a formal definition and conduct a preliminary theoretical and empirical study of the bias.
Unsupervised learning has proven powerful at leveraging vast amounts of raw unstructured data (Kingma et al., 2014; Radford et al., 2017; Peters et al., 2018; Devlin et al., 2018) .
Through unsupervised learning, latent variable models learn the explicit likelihood over an unlabeled dataset with an aim to discover hidden factors of variation as well as a generative process.
An example hereof is the Variational Autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014 ), which exploits neural networks to perform amortized approximate inference over the latent variables.
This approximation comes with limitations, both in terms of the latent prior and the amortized inference network (Burda et al., 2015; Hoffman and Johnson, 2016) .
It has been proposed to go beyond Gaussian priors and approximate posteriors using, for instance, autoregressive flows (Chen et al., 2016; Kingma et al., 2016) , a hierarchy of latent variables (Sønderby et al., 2016; Maaløe et al., 2016; 2019) , a mixture of priors (Tomczak and Welling, 2017) or discrete distributions (van den Oord et al., 2017; Razavi et al., 2019; Rolfe, 2016; Vahdat et al., 2018b,a; Sadeghi et al., 2019) .
Current state-of-the-art deep learning models are trained on web-scaled datasets and increasing the number of parameters has proven to be a way to yield remarkable results (Radford et al., 2019) .
Nonetheless, time complexity and GPU memory are scarce resources, and the need for both resources increases linearly with the depth of neural network.
Li et al. (2016) and Lample et al. (2019) showed that large memory layers are an effective way to increase the capacity of a model while reducing the computation time.
Bornschein et al. (2017) showed that discrete variational distributions are analogous to neural memory (Graves et al., 2016) , which can be used to improve generative models (Li et al., 2016; Lample et al., 2019) .
Also, memory values are yet another way to embed data, allowing for applications such as one-shot transfer learning (Rezende et al., 2016) and semi-supervised learning that scales (Jang et al., 2016) .
Depth promises to bring VAEs to the next frontier (Maaløe et al., 2019) .
However, the available computing resources may shorten that course.
Motivated by the versatility and the scalability of discrete distributions, we introduce the Hierarchical Discrete Variational Autoencoder.
HD-VAE is a VAE with a hierarchy of factorized categorical latent variables.
In contrast to the existing discrete latent variable methods, our model
(a) is hierarchical,
(b) trained using Concrete/Gumbel-Softmax,
(c) relies on a conditional prior that is learned end-to-end and
(d) uses a variational distribution that is parameterized as a large stochastic memory layer.
Despite being optimized for a biased surrogate objective we show that a shallow HD-VAE outperforms the baseline Gaussian-based models on multiple binary images datasets in terms of test log-likelihood.
This motivates us to introduce a definition of the relaxation bias and to measure how it is affected by the configuration of latent variables.
In this preliminary research, we have introduced a design for variational memory layers and shown that it can be exploited to build hierarchical discrete VAEs that outperform Gaussian-prior VAEs.
However, without explicitly constraining the model, the relaxation bias grows with the number of latent layers, which prevents us from building deep hierarchical models that are competitive with state-of-the-art methods.
In future work we will attempt to harness the relaxed-ELBO to improve the performance of the HD-VAE further.
Optimization During training, we mitigate the posterior collapse using the freebits (Kingma et al., 2016) strategy with λ = 2 for each stochastic layer.
A dropout of 0.5 is used to avoid overfitting.
We linearly decrease the temperature τ from 0.8 to 0.3 during the first 2·10^5 steps and from 0.3 to 0.1 during the next 2·10^5 steps.
We use the Adamax optimizer (Kingma and Ba, 2014) with an initial learning rate of 2·10^-3 for all parameters except for the memory values, which are trained using a learning rate of 2·10^-2 to compensate for sparsity.
We use a batch size of 128.
All models are trained until they overfit and we evaluate the log-likelihood using 1000 importance weighted samples (Burda et al., 2015) .
Despite its large number of parameters, HD-VAE seems to be more robust to overfitting, which may be explained by the sparse update of the memory values.
Runtime Sparse CUDA operations are currently not used, which means there is room to make HD-VAE more memory efficient.
Even during training, one may truncate the relaxed samples to benefit from the sparse optimizations.
Table 3 shows the average elapsed time per training iteration as well as the memory usage for a 6-layer LVAE with 6 × 16 stochastic units, K = 16^2, and a batch size of 128.
Table 4 : Measured one-importance-weighted ELBO on binarized MNIST for a LVAE model with different number of layers and different numbers of stochastic units using relaxed (τ = 0.1) and hard samples (τ = 0).
We report N = Σ_{l=1}^{L} n_l , where n_l denotes the number of latent variables at layer l, and we set K = 256 for all the variables.
Let x be an observed variable, and consider a VAE model with one layer of N categorical latent variables z = {z_1 , . . . , z_N }, each with K classes.
The generative model is p_θ(x, z) and the inference model is q_φ(z|x).
For a temperature parameter τ > 0, the equivalent relaxed Concrete variables are denoted ẑ = {ẑ_1 , . . . , ẑ_N }, ẑ_i ∈ [0, 1]^K .
We define H = one hot • arg max and
Following Tucker et al. (2017), using the Gumbel-Max trick, one can notice that
We now assume that f_{θ,φ,x} is κ-Lipschitz with respect to the L2 norm.
Then, by definition,
The relaxation bias can therefore be bounded as follows:
Furthermore, we can define the adjusted Evidence Lower Bound for relaxed categorical variables (relaxed-ELBO):
As shown by the experiment presented in Section 4.2, the quantity L_1^{τ>0}(θ, φ) appears to be a positive quantity.
Furthermore, as the model attempts to exploit the relaxation of z to maximize the surrogate objective, one may consider that
is a tight bound of δ τ (θ, φ), meaning that the relaxed-ELBO is a tight lower bound of the ELBO.
The relaxed-ELBO is differentiable and may enable automatic control of the temperature, as the left and right terms of the relaxed-ELBO respectively favor high and low temperatures.
κ-Lipschitz neural networks can be designed using Weight Normalization (Salimans and Kingma, 2016) or Spectral Normalization (Miyato et al., 2018) .
Nevertheless handling residual connections and multiple layers of latent variables is not trivial.
We note however that in the case of a one-layer VAE, one only needs to constrain the VAE decoder to be κ-Lipschitz as the surrogate objective is computed as
In the appendix E, we show how the relaxed-ELBO can be extended to multiple layers of latent variables in the LVAE setting.
Appendix D. Defining f_{θ,φ} on the domain of the relaxed categorical variables z̃. f_{θ,φ} is only defined for categorical samples.
For relaxed samples z̃, we define f_{θ,φ} as:
The introduction of the function H is necessary as the terms
(b) and
(c) are only defined for categorical samples.
This expression remains valid for hard samples z̃.
During training, relaxing the expressions
(b) and
(c) can potentially yield gradients of lower variance.
In the case of a single categorical variable z described by the set of K class probabilities π = {π_1 , . . . , π_K }, one can define:
Alternatively, aside from being a relaxed categorical distribution, the Concrete/Gumbel-Softmax also defines a proper continuous distribution.
When treated as such, this results in a proper probabilistic model with continuous latent variables, and the objective is unbiased.
In that case, the density is given by | In this paper, we introduce a discrete hierarchy of categorical latent variables that we train using the Concrete/Gumbel-Softmax relaxation and we derive an upper bound for the absolute difference between the unbiased and the biased objective. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:659 |
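The record above trains HD-VAE with the Concrete/Gumbel-Softmax relaxation at a temperature τ that is annealed during training. The snippet below is a generic sampler for that relaxation (with an optional hard, straight-through-style discretization); it is not the paper's memory-layer parameterization, and the epsilon constants are assumptions for numerical safety.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau, rng, hard=False):
    """Sample a relaxed one-hot vector from a Concrete/Gumbel-Softmax distribution.

    logits: unnormalized log-probabilities of the K classes.
    tau:    temperature; smaller values yield samples closer to one-hot.
    """
    u = rng.uniform(size=np.shape(logits))
    gumbel = -np.log(-np.log(u + 1e-20) + 1e-20)
    y = np.exp((np.asarray(logits) + gumbel) / tau)
    y = y / y.sum()
    if hard:  # discretize the relaxed sample (straight-through style)
        z = np.zeros_like(y)
        z[np.argmax(y)] = 1.0
        return z
    return y

rng = np.random.default_rng(0)
print(gumbel_softmax_sample(np.log([0.7, 0.2, 0.1]), tau=0.5, rng=rng))
```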
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Sequential decision problems for real-world applications often need to be solved in real-time, requiring algorithms to perform well with a restricted computational budget.
Width-based lookaheads have shown state-of-the-art performance in classical planning problems as well as over the Atari games with tight budgets.
In this work we investigate width-based lookaheads over Stochastic Shortest paths (SSP).
We analyse why width-based algorithms perform poorly over SSP problems, and overcome these pitfalls proposing a method to estimate costs-to-go.
We formalize width-based lookaheads as an instance of the rollout algorithm, give a definition of width for SSP problems and explain its sample complexity.
Our experimental results over a variety of SSP benchmarks show the algorithm to outperform other state-of-the-art rollout algorithms such as UCT and RTDP.
Model-based lookahead algorithms provide the ability to autonomously solve a large variety of sequential decision making problems.
Lookaheads search for solutions by considering sequences of actions that can be made from the current state up to a certain time into the future.
For realworld applications decisions often need to be computed in real-time, requiring algorithms to perform with a restricted computational budget.
Limiting search in this way can result in considering states and trajectories which do not provide useful information.
To address this, lookaheads can be augmented with heuristics that estimate costs-to-go to prioritise states and trajectories, and have been shown to perform well where computation budgets are restricted BID8 .This
paper is concerned with Stochastic Shortest Path (SSP) problems which are often used to compare and evaluate search algorithms. We consider
the width-based family of planning algorithms, first introduced by BID15 , which aim to prioritise the exploration of novel areas of the state space. Two width-based
planners, Lipovetzky and Geffner's breadth-first search, IW(1), and the depth-first search, Rollout-IW(1) BID1 , are investigated on SSP problems. We first provide
the necessary background for SSP problems and width-based algorithms, while also formalising width-based algorithms as instances of the rollout algorithm BID4 . We then show the
motive to augment width-based lookaheads with cost estimates on SSP problems, define the width of SSP problems and propose a novel width-based algorithm that estimates costs-to-go by simulating a general base policy. Our experimental
study shows that the algorithm compares favourably to the original Rollout-IW(1) algorithm and to other state-of-the-art instances of the rollout algorithm.
MCTS approaches typically combine lookaheads and cost-to-go approximations, along with statistical tests to determine the most promising directions and focus their sampling effort.
The width-based methods described in this paper do so too, but in ways which are, at first sight, orthogonal to existing strategies.
It remains an area of active research to map out exactly how the width-based methods described in this paper, and those elsewhere by BID11 too, provide alternatives to the limitations of existing MCTS approaches.
Having said this, there is no general theory guiding the design of MCTS algorithms BID4 , and to avoid involuntarily generating ad-hoc, problem-dependent solutions it is important to follow strict protocols that alert of a potential lack of statistical significance in results, while relying on a diverse set of benchmarks that are both easily understood and that highlight the limitations of existing state-of-the-art methods and help overcome them. | We propose a new Monte Carlo Tree Search / rollout algorithm that relies on width-based search to construct a lookahead. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:66 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we propose a novel technique for improving the stochastic gradient descent (SGD) method to train deep networks, which we term \emph{PowerSGD}.
The proposed PowerSGD method simply raises the stochastic gradient to a certain power $\gamma\in[0,1]$ during iterations and introduces only one additional parameter, namely, the power exponent $\gamma$ (when $\gamma=1$, PowerSGD reduces to SGD).
We further propose PowerSGD with momentum, which we term \emph{PowerSGDM}, and provide convergence rate analysis on both PowerSGD and PowerSGDM methods.
Experiments are conducted on popular deep learning models and benchmark datasets.
Empirical results show that the proposed PowerSGD and PowerSGDM obtain faster initial training speed than adaptive gradient methods, comparable generalization ability with SGD, and improved robustness to hyper-parameter selection and vanishing gradients.
PowerSGD is essentially a gradient modifier via a nonlinear transformation.
As such, it is orthogonal and complementary to other techniques for accelerating gradient-based optimization.
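For concreteness, the update described above can be sketched in a few lines of Python; the element-wise transform sign(g)*|g|**gamma follows the Powerball function the authors build on, while the function name, learning rate and default gamma are placeholders of this sketch, not values from the paper.

    import numpy as np

    def powersgd_step(w, grad, lr=0.01, gamma=0.5):
        # Apply the element-wise nonlinear transform to the stochastic gradient
        # before the descent step: gamma = 1 recovers plain SGD, while gamma = 0
        # keeps only the sign of each gradient coordinate.
        return w - lr * np.sign(grad) * np.abs(grad) ** gamma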
Stochastic optimization as an essential part of deep learning has received much attention from both the research and industry communities.
High-dimensional parameter spaces and stochastic objective functions make the training of deep neural network (DNN) extremely challenging.
Stochastic gradient descent (SGD) (Robbins & Monro, 1951 ) is the first widely used method in this field.
It iteratively updates the parameters of a model by moving them in the direction of the negative gradient of the objective evaluated on a mini-batch.
Based on SGD, other stochastic optimization algorithms, e.g., SGD with Momentum (SGDM) (Qian, 1999) , AdaGrad (Duchi et al., 2011) , RMSProp (Tieleman & Hinton, 2012) , Adam (Kingma & Ba, 2015) are proposed to train DNN more efficiently.
Despite the popularity of Adam, its generalization performance as an adaptive method has been demonstrated to be worse than that of non-adaptive methods.
Adaptive methods (like AdaGrad, RMSProp and Adam) often obtain faster convergence rates in the initial iterations of training process.
Their performance, however, quickly plateaus on the testing data (Wilson et al., 2017) .
In Reddi et al. (2018) , the authors provided a convex optimization example to demonstrate that the exponential moving average technique can cause non-convergence in the RMSProp and Adam, and they proposed a variant of Adam called AMSGrad, hoping to solve this problem.
The authors provide a theoretical guarantee of convergence but only illustrate its better performance on training data.
However, the generalization ability of AMSGrad on test data is found to be similar to that of Adam, and a considerable performance gap still exists between AMSGrad and SGD (Keskar & Socher, 2017; Chen et al., 2018) .
Indeed, the optimizer is chosen as SGD (or with Momentum) in several recent state-of-the-art works in natural language processing and computer vision (Luo et al., 2018; Wu & He, 2018) , where in these instances SGD does perform better than adaptive methods.
Despite the practical success of SGD, obtaining sharp convergence results in the non-convex setting for SGD to efficiently escape saddle points (i.e., convergence to second-order stationary points) remains a topic of active research (Jin et al., 2019; Fang et al., 2019) .
Related Works: SGD, as the first efficient stochastic optimizer for training deep networks, iteratively updates the parameters of a model by moving them in the direction of the negative gradient of the objective function evaluated on a mini-batch.
SGDM brings a Momentum term from the physical perspective, which obtains faster convergence speed than SGD.
The Momentum idea can be seen as a particular case of exponential moving average (EMA).
The adaptive learning rate (ALR) technique, first introduced by AdaGrad, is then widely adopted but also disputed in deep learning.
In contrast to SGD, AdaGrad updates the parameters according to the square roots of the sums of squared coordinates in all the past gradients.
AdaGrad can potentially lead to huge gains in terms of convergence (Duchi et al., 2011) when the gradients are sparse.
However, it will also lead to rapid learning rate decay when the gradients are dense.
RMSProp, which first appeared in an unpublished work (Tieleman & Hinton, 2012) , was proposed to handle the aggressive, rapidly decreasing learning rate in AdaGrad.
It computes the exponential moving average of the past squared gradients, instead of computing the sum of the squares of all the past gradients in AdaGrad.
The idea of AdaGrad and RMSProp propelled another representative algorithm: Adam, which updates the weights according to the mean divided by the root mean square of recent gradients, and has achieved enormous success.
Recently, research to link discrete gradient-based optimization to continuous dynamic system theory has received much attention (Yuan et al., 2016; Mazumdar & Ratliff, 2018) .
While the proposed optimizer excels at improving initial training, it is completely complementary to the use of learning rate schedules (Smith & Topin, 2019; Loshchilov & Hutter, 2016) .
We will explore how to combine learning rate schedules with the PoweredSGD optimizer in future work.
While other popular techniques focus on modifying the learning rates and/or adopting momentum terms in the iterations, we propose to modify the gradient terms via a nonlinear function called the Powerball function by the authors of Yuan et al. (2016) .
In Yuan et al. (2016) , the authors presented the basic idea of applying the Powerball function in gradient descent methods.
In this paper, we
1) systematically present the methods for stochastic optimization with and without momentum;
2) provide convergence proofs;
3) include experiments using popular deep learning models and benchmark datasets.
Another related work was presented in Bernstein et al. (2018) , where the authors presented a version of stochastic gradient descent which uses only the signs of gradients.
This essentially corresponds to the special case of PoweredSGD (or PoweredSGDM) when the power exponential γ is set to 0.
We also point out that despite the name resemblance, the PowerSign optimizer proposed in Bello et al. (2017) is a conditional scaling of the gradient, whereas the proposed PoweredSGD optimizer applies a component-wise transformation to the gradient. | We propose a new class of optimizers for accelerated non-convex optimization via a nonlinear gradient transformation. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:660 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We aim to build complex humanoid agents that integrate perception, motor control, and memory.
In this work, we partly factor this problem into low-level motor control from proprioception and high-level coordination of the low-level skills informed by vision.
We develop an architecture capable of surprisingly flexible, task-directed motor control of a relatively high-DoF humanoid body by combining pre-training of low-level motor controllers with a high-level, task-focused controller that switches among low-level sub-policies.
The resulting system is able to control a physically-simulated humanoid body to solve tasks that require coupling visual perception from an unstabilized egocentric RGB camera during locomotion in the environment.
Supplementary video link: https://youtu.be/fBoir7PNxPk
In reinforcement learning (RL), a major challenge is to simultaneously cope with high-dimensional input and high-dimensional action spaces.
As techniques have matured, it is now possible to train high-dimensional vision-based policies from scratch to generate a range of interesting behaviors ranging from game-playing to navigation BID17 BID32 BID41 .
Likewise, for controlling bodies with a large number of degrees of freedom (DoFs), in simulation, reinforcement learning methods are beginning to surpass optimal control techniques.
Here, we try to synthesize this progress and tackle high-dimensional input and output at the same time.
We evaluate the feasibility of full-body visuomotor control by comparing several strategies for humanoid control from vision. Both to simplify the engineering of a visuomotor system and to reduce the complexity of task-directed exploration, we construct modular agents in which a high-level system possessing egocentric vision and memory is coupled to a low-level, reactive motor control system.
We build on recent advances in imitation learning to make flexible low-level motor controllers for high-DoF humanoids.
The motor skills embodied by the low-level controllers are coordinated and sequenced by the high-level system, which is trained to maximize sparse task reward. Our approach is inspired by themes from neuroscience as well as ideas developed and made concrete algorithmically in the animation and robotics literatures.
In motor neuroscience, studies of spinal reflexes in animals ranging from frogs to cats have led to the view that locomotion and reaching are highly prestructured, enabling subcortical structures such as the basal ganglia to coordinate a motor repertoire; and cortical systems with access to visual input can send low complexity signals to motor systems in order to evoke elaborate movements BID7 BID1 BID9 . The
study of "movement primitives" for robotics descends from the work of BID16 . Subsequent
research has focused on innovations for learning or constructing primitives for control of movements BID15 BID20 , deploying and sequencing them to solve tasks BID36 BID19 BID22 , and increasing the complexity of the control inputs to the primitives BID31 . Particularly
relevant to our cause is the work of BID21 in which primitives were coupled by reinforcement learning to external perceptual inputs. Research in the animation literature has also sought to produce physically simulated characters capable of distinct movements that can be flexibly sequenced. This ambition
can be traced to the virtual stuntman BID6 a) and has been advanced markedly in the work of Liu BID27 . Further recent
work has relied on reinforcement learning to schedule control policies known as "control fragments", each one able to carry out only a specialized short movement segment BID24 . In work to date
, such control fragments have yet to be coupled to visual input as we will pursue here. From the perspective
of the RL literature BID38 , motor primitives and control fragments may be considered specialized instantiations of "option" sub-policies. Our work aims to contribute to this multi-disciplinary literature by demonstrating concretely how control-fragment-like low-level movements can be coupled to and controlled by a vision and memory-based high-level controller to solve tasks. Furthermore, we demonstrate
the scalability of the approach to a greater number of control fragments than previous works. Taken together, we demonstrate
progress towards the goal of integrated agents with vision, memory, and motor control.
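The modular design described above can be pictured with a minimal control loop in Python; the environment interface, the observation keys and the fixed switching period are assumptions made purely for illustration and are not taken from the paper.

    def run_episode(env, high_level_policy, fragments, switch_every=10):
        # High level: egocentric vision plus task context selects which low-level
        # fragment to run. Low level: the chosen fragment maps proprioception to
        # joint torques until the next switching point.
        obs = env.reset()
        done, t, fragment_id = False, 0, 0
        while not done:
            if t % switch_every == 0:
                fragment_id = high_level_policy(obs["vision"], obs["task"])
            torques = fragments[fragment_id](obs["proprioception"])
            obs, reward, done = env.step(torques)
            t += 1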
In this work we explored the problem of learning to reuse motor skills to solve whole body humanoid tasks from egocentric camera observations.
We compared a range of approaches for reusing lowlevel motor skills that were obtained from motion capture data, including variations related to those presented in BID24 BID34 .
To date, there is limited learning-based work on humanoids in simulation reusing motor skills to solve new tasks, and much of what does exist is in the animation literature.
A technical contribution of the present work was to move past hand-designed observation features (as used in BID34 ) towards a more ecological observation setting: using a front-facing camera is more similar to the kinds of observations a real-world, embodied agent would have.
We also show that hierarchical motor skill reuse allowed us to solve tasks that we could not with a flat policy.
For the walls and go-to-target tasks, learning from scratch was slower and produced less robust behavior.
For the forage tasks, learning from scratch failed completely.
Finally, the heterogeneous forage is an example of a task that integrates memory and perception. There are some other very clear continuities between what we present here and previous work.
For learning low-level tracking policies from motion capture data, we employed a manually specified similarity measure against motion capture reference trajectories, consistent with previous work BID26 BID34 .
Additionally, the low-level policies were time-indexed: they operated over only a certain temporal duration and received time or phase as input.
Considerably less research has focused on learning imitation policies either without a pre-specified scoring function or without time-indexing (but see e.g. ).
Compared to previous work using control fragments BID24 , our low-level controllers were built without a sampling-based planner and were parameterized as neural networks rather than linear-feedback policies. We also want to make clear that the graph-transition and steerable structured low-level control approaches require significant manual curation and design: motion capture clips must be segmented by hand, possibly manipulated by blending/smoothing clips from the end of one clip to the beginning of another.
This labor intensive process requires considerable skill as an animator; in some sense this almost treats humanoid control as a computer-aided animation problem, whereas we aim to treat humanoid motor control as an automated and data-driven machine learning problem.
We acknowledge that relative to previous work aimed at graphics and animation, our controllers are less graceful.
Each approach involving motion capture data can suffer from distinct artifacts, especially without detailed manual editing -the hand-designed controllers have artifacts at transitions due to imprecise kinematic blending but are smooth within a behavior, whereas the control fragments have a lesser but consistent level of jitter throughout due to frequent switching.
Methods to automatically (i.e. without human labor) reduce movement artifacts when dealing with large movement repertoires would be interesting to pursue. Moreover, we wish to emphasize that due to the human-intensive components of training structured low-level controllers, fully objective algorithm comparison with previous work can be somewhat difficult.
This will remain an issue so long as human editing is a significant component of the dominant solutions.
Here, we focused on building movement behaviors with minimal curation, at scale, that can be recruited to solve tasks.
Specifically, we presented two methods that do not require curation and can re-use low-level skills with cold-switching.
Additionally, these methods can scale to a large number of different behaviors without further intervention. We view this work as an important step toward the flexible use of motor skills in an integrated visuomotor agent that is able to cope with tasks that pose simultaneous perceptual, memory, and motor challenges to the agent.
Future work will necessarily involve refining the naturalness of the motor skills to enable more general environment interactions and to subserve more complicated, compositional tasks. | Solve tasks involving vision-guided humanoid locomotion, reusing locomotion behavior from motion capture data. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:661 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The gap between the empirical success of deep learning and the lack of strong theoretical guarantees calls for studying simpler models.
By observing that a ReLU neuron is a product of a linear function with a gate (the latter determines whether the neuron is active or not), where both share a jointly trained weight vector, we propose to decouple the two.
We introduce GaLU networks — networks in which each neuron is a product of a Linear Unit, defined by a weight vector which is being trained, with a Gate, defined by a different weight vector which is not being trained.
Generally speaking, given a base model and a simpler version of it, the two parameters that determine the quality of the simpler version are whether its practical performance is close enough to the base model and whether it is easier to analyze it theoretically.
We show that GaLU networks perform similarly to ReLU networks on standard datasets and we initiate a study of their theoretical properties, demonstrating that they are indeed easier to analyze.
We believe that further research of GaLU networks may be fruitful for the development of a theory of deep learning.
An artificial neuron with the ReLU activation function is the function $f_w(x)\colon \mathbb{R}^d \to \mathbb{R}$ such that $f_w(x) = \max\{x^\top w, 0\} = \mathbb{1}_{x^\top w \ge 0} \cdot x^\top w$.
The latter formulation demonstrates that the parameter vector w has a dual role; it acts both as a filter or a gate that decides if the neuron is active or not, and as linear weights that control the value of the neuron if it is active.
We introduce an alternative neuron, called Gated Linear Unit or GaLU for short, which decouples those roles.
A 0-1 GaLU neuron is a function $g_{w,u}(x)\colon \mathbb{R}^d \to \mathbb{R}$ such that $g_{w,u}(x) = \mathbb{1}_{x^\top u \ge 0} \cdot x^\top w$ (1).
GaLU neurons, and therefore GaLU networks, are at least as expressive as their ReLU counterparts, since $f_w = g_{w,w}$.
On the other hand, GaLU networks appear problematic from an optimization perspective, because the parameter u cannot be trained using gradient based optimization (since $\nabla_u g_{w,u}(x)$ is always zero).
In other words, training GaLU networks with gradient based algorithms is equivalent to initializing the vector u and keeping it constant thereafter.
A more general definition of a GaLU network is given in section 2. The main claim of the paper is that GaLU networks are on one hand as effective as ReLU networks on real world datasets (section 3) while on the other hand they are easier to analyze and understand (section 4).
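A direct transcription of the two definitions above into Python (illustrative only; x, w and u are NumPy vectors of the same dimension, and the function names are made up for the example):

    import numpy as np

    def relu_neuron(x, w):
        # f_w(x) = 1[x'w >= 0] * x'w: the same vector w both gates and weights.
        return max(float(x @ w), 0.0)

    def galu_neuron(x, w, u):
        # g_{w,u}(x) = 1[x'u >= 0] * x'w: u only gates (and is kept fixed during
        # training), w only weights (and is trained); g_{w,w} recovers the ReLU neuron.
        return float(x @ u >= 0.0) * float(x @ w)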
The standard paradigm in deep learning is to use neurons of the form $\sigma(x^\top w)$ for some differentiable non-linear function $\sigma\colon \mathbb{R} \to \mathbb{R}$. In this article we proposed a different kind of neuron, $\sigma_{i,j} \cdot x^\top w$, where $\sigma_{i,j}$ is some function of the example and the neuron index that remains constant throughout training.
Those networks achieve similar results to those of their standard counterparts, and they are easier to analyze and understand. To the extent that our arguments are convincing, this gives new directions for further research.
Better understanding of the one hidden layer case (from section 5) seems feasible.
And as GaLU and ReLU networks behave identically for this problem, it gives us reasons to hope that understanding the behavior of GaLU networks would also explain ReLU networks and maybe other non-linearities as well.
As for deeper networks, it is also not beyond hope that GaLU0 networks would allow some better theoretical analysis than what we have so far. | We propose Gated Linear Unit networks — a model that performs similarly to ReLU networks on real data while being much easier to analyze theoretically. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:662 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Machine learning systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a different distribution from the one used for training.
With their growing use in critical applications, it becomes important to develop systems that are able to accurately quantify their predictive uncertainty and screen out these anomalous inputs.
However, unlike standard learning tasks, there is currently no well established guiding principle for designing architectures that can accurately quantify uncertainty.
Moreover, commonly used OoD detection approaches are prone to errors and even sometimes assign higher likelihoods to OoD samples.
To address these problems, we first seek to identify guiding principles for designing uncertainty-aware architectures, by proposing Neural Architecture Distribution Search (NADS).
Unlike standard neural architecture search methods which seek for a single best performing architecture, NADS searches for a distribution of architectures that perform well on a given task, allowing us to identify building blocks common among all uncertainty aware architectures.
With this formulation, we are able to optimize a stochastic outlier detection objective and construct an ensemble of models to perform OoD detection.
We perform multiple OoD detection experiments and observe that our NADS performs favorably compared to state-of-the-art OoD detection methods.
Detecting anomalous data is crucial for safely applying machine learning in autonomous systems for critical applications and for AI safety (Amodei et al., 2016) .
Such anomalous data can come in settings such as autonomous driving (NHTSA, 2017) , disease monitoring (Hendrycks & Gimpel, 2016) , and fault detection (Hendrycks et al., 2019b) .
In these situations, it is important for these systems to reliably detect abnormal inputs so that their occurrence can be overseen by a human, or the system can proceed using a more conservative policy.
The widespread use of deep learning models within these autonomous systems have aggravated this issue.
Despite having high performance in many predictive tasks, deep networks tend to give high confidence predictions on Out-of-Distribution (OoD) data (Goodfellow et al., 2015; Nguyen et al., 2015) .
Moreover, commonly used OoD detection approaches are prone to errors and even assign higher likelihoods to samples from other datasets (Lee et al., 2018; Hendrycks & Gimpel, 2016) .
Unlike common machine learning tasks such as image classification, segmentation, and speech recognition, there are currently no well established guidelines for designing architectures that can accurately screen out OoD data and quantify its uncertainty.
Such a gap in our knowledge makes Neural Architecture Search (NAS) a promising option to explore the better design of uncertaintyaware models (Elsken et al., 2018) .
NAS algorithms attempt to find an optimal neural network architecture for a specific task.
Existing efforts have primarily focused on searching for architectures that perform well on image classification or segmentation.
However, it is unclear whether architecture components that are beneficial for image classification and segmentation models would also lead to better uncertainty quantification and thereafter be effective for OoD detection.
Moreover, previous work on deep uncertainty quantification shows that ensembles can help calibrate OoD classifier based methods, as well as improve OoD detection performance of likelihood estimation models (Lakshminarayanan et al., 2017; Choi & Jang, 2018) .
Because of this, instead of a single best performing architecture for uncertainty awareness, one might consider a distribution of well-performing architectures.
Along this direction, designing an optimization objective which leads to uncertainty-aware models is also not straightforward.
With no access to labels, unsupervised/self-supervised generative models which maximize the likelihood of in-distribution data become the primary tools for uncertainty quantification (Hendrycks et al., 2019a) .
However, these models counter-intuitively assign high likelihoods to OoD data (Nalisnick et al., 2019a; Choi & Jang, 2018; Hendrycks et al., 2019a; Shafaei et al.) .
Because of this, maximizing the log-likelihood is inadequate for OoD detection.
On the other hand, Choi & Jang (2018) proposed using the Widely Applicable Information Criterion (WAIC) (Watanabe, 2013) , a penalized log-likelihood score, as the OoD detection criterion.
However, the score was approximated using an ensemble of models that was trained on maximizing the likelihood and did not directly optimize the WAIC score.
To this end, we propose a novel Neural Architecture Distribution Search (NADS) framework to identify common building blocks that naturally incorporate model uncertainty quantification and compose good OoD detection models.
NADS is an architecture search method designed to search for a distribution of well-performing architectures, instead of a single best architecture by formulating the architecture search problem as a stochastic optimization problem.
Using NADS, we optimize the WAIC score of the architecture distribution, a score that was shown to be robust towards model uncertainty.
Such an optimization problem, with a stochastic objective over a probability distribution of architectures, is not amenable to traditional NAS optimization strategies.
We make this optimization problem tractable by taking advantage of weight sharing between different architectures, as well as through a parameterization of the architecture distribution, which allows for a continuous relaxation of the discrete search problem.
Using the learned posterior architecture distribution, we construct a Bayesian ensemble of deep models to perform OoD detection.
Finally, we perform multiple OoD detection experiments to show the efficacy of our proposed method.
Unlike NAS for common learning tasks, specifying a model and an objective to optimize for uncertainty estimation and outlier detection is not straightforward.
Moreover, using a single model may not be sufficient to accurately quantify uncertainty and successfully screen out OoD data.
We developed a novel neural architecture distribution search (NADS) formulation to identify a random ensemble of architectures that perform well on a given task.
Instead of seeking to maximize the likelihood of in-distribution data which may cause OoD samples to be mistakenly given a higher likelihood, we developed a search algorithm to optimize the WAIC score, a Bayesian adjusted estimation of the data entropy.
Using this formulation, we have identified several key features that make up good uncertainty quantification architectures, namely a simple structure in the shallower layers, use of information preserving operations, and a larger, more expressive structure with skip connections for deeper layers to ensure optimization stability.
Using the architecture distribution learned by NADS, we then constructed an ensemble of models to estimate the data entropy using the WAIC score.
We demonstrated the superiority of our method to existing OoD detection methods and showed that our method has highly competitive performance without requiring access to OoD samples.
Overall, NADS as a new uncertainty-aware architecture search strategy enables model uncertainty quantification that is critical for more robust and generalizable deep learning, a crucial step in safely applying deep learning to healthcare, autonomous driving, and disaster response.
| We propose an architecture search method to identify a distribution of architectures and use it to construct a Bayesian ensemble for outlier detection. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:663 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Modern applications from Autonomous Vehicles to Video Surveillance generate massive amounts of image data.
In this work we propose a novel image outlier detection approach (IOD for short) that leverages cutting-edge image classifiers to discover outliers without using any labeled outliers.
We observe that although intuitively the confidence that a convolutional neural network (CNN) has that an image belongs to a particular class could serve as an outlierness measure for each image, directly applying this confidence to detect outliers does not work well.
This is because CNN often has high confidence on an outlier image that does not belong to any target class due to its generalization ability that ensures the high accuracy in classification.
To solve this issue, we propose a Deep Neural Forest-based approach that harmonizes the contradictory requirements of accurately classifying images and correctly detecting the outlier images.
Our experiments using several benchmark image datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN demonstrate the effectiveness of our IOD approach for outlier detection, capturing more than 90% of outliers generated by injecting one image dataset into another, while still preserving the classification accuracy of the multi-class classification problem.
Motivation.
As modern applications such as autonomous vehicles and video surveillance generate larger amounts of image data, the discovery of outliers from such image data is becoming increasingly critical.
Examples of such image outliers include unauthorized personnel observed in a secret military base or unexpected objects encountered by self-driving cars on the road.
Capturing these outliers can prevent intelligence leaks or save human lives. State-of-the-Art.
Due to the exceptional success of deep learning over classical methods in computer vision, in recent years a number of works BID17 BID27 BID5 BID23 leverage the representation learning ability of a deep autoencoder or GAN BID7 for outlier detection.
Outliers are either detected by plugging the learned representation into classical outlier detection methods or directly reported by employing the reconstruction error as the outlier score BID36 BID4 .
However, these approaches use a generic network that is not trained specifically for outlier detection.
Although the produced representation is perhaps effective in representing the common features of the "normal" data, it is not necessarily effective in distinguishing "outliers" from "inliers".
Recently, some works BID26 BID21 were proposed to solve this issue by incorporating the outlier detection objective actively into the learning process.
However, these approaches are all based on the one-class technique BID28 BID18 BID33 that learns a single boundary between outliers and inliers.
Although they perform relatively well when handling simplistic data sets such as MNIST, they perform poorly at supporting complex data sets with multiple "normal" classes such as CIFAR-10 ( BID11 ).
This is due to the difficulty in finding a separator that encompasses all normal classes yet none of the outliers. Proposed Approach and Contributions.
In this work we propose a novel image outlier detection (IOD) strategy that successfully detects image outliers from complex real data sets with multiple normal classes.
IOD unifies the core principles of cutting-edge deep learning image classifiers BID7 and classical outlier detection within one framework. Classical outlier detection techniques BID3 BID9 BID1 consider an object as an outlier if its outlierness score is above a certain cutoff threshold ct.
Intuitively, given a Convolutional Neural Network (CNN) BID12 trained using normal training data (namely, data without labeled outliers), the confidence that the CNN has that an image belongs to a particular class could be leveraged to measure the outlierness of the image.
This is based on the intuition that we expect a CNN to be less confident about an outlier compared to inlier objects, since outliers by definition are dissimilar from any normal class.
By using the confidence as an outlier score, IOD could separate outliers from all normal classes.
However, our experiments (Sec. 2) show that directly using the confidence produced by CNN to identify outliers in fact is not particularly effective.
This is because the requirements of accurately classifying images and correctly detecting the outlier images conflict with each other.
CNN achieves high accuracy in image classification because of its excellent generalization capability that enables a CNN to overcome the gap between the training and testing images.
However, the generalization capability jeopardizes the detection of outliers, because it increases the chance of erroneously assigning an outlier image, with high confidence, to some class to which it does not actually belong. We solve this problem by proposing a deep neural decision forest-based (DNDF) approach equipped with an information theory-based regularization function that leverages the strong bias of the classification decisions made within each single decision tree and the ensemble nature of the overall decision forest.
Further, we introduce a new architecture of the DNDF that ensures independence amongst the trees and in turn improves the classification accuracy.
Finally, we use a joint optimization strategy to train both the split and leaf nodes of each tree.
This speeds up the convergence. We demonstrate the effectiveness of our outlierness measure, the deep neural forest-based approach, the regularization function, and the new architecture using benchmark datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN, with accuracy higher than 0.9 at detecting outliers, while preserving the accuracy of multi-class classification.
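The baseline confidence-thresholding idea discussed above can be written down directly; this sketch scores an image from the softmax output of a classifier and a cutoff ct, whereas the paper's actual score comes from the deep neural decision forest.

    import numpy as np

    def outlierness(class_probs):
        # The less confident the classifier is about its top class,
        # the more outlier-like the image.
        return 1.0 - float(np.max(class_probs))

    def is_outlier(class_probs, ct=0.5):
        # Follow the classical rule: flag the image when its outlierness
        # score exceeds the cutoff threshold ct.
        return outlierness(class_probs) > ct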
In this work we propose a novel approach that effectively detects outliers from image data.
The key novelties include a general image outlier detection framework and effective outlierness measure that leverages the deep neural decision forest.
Optimizations such as a new architecture that connects the deep neural network with the decision trees, and a regularization that penalizes high-entropy routing decisions, are also proposed to further enhance the outlier detection capacity of IOD.
In the future we plan to investigate how to make our approach work in the multi-label classification setting. | A novel approach that detects outliers from image data, while preserving the classification accuracy of image classification | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:664 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources.
We design a Dynamic Point-cloud Convolution (D-Conv) operator as the core component of CloudLSTMs, which performs convolution directly over point-clouds and extracts local spatial features from sets of neighboring points that surround different elements of the input.
This operator maintains the permutation invariance of sequence-to-sequence learning frameworks, while representing neighboring correlations at each time step -- an important aspect in spatiotemporal predictive learning.
The D-Conv operator resolves the grid-structural data requirements of existing spatiotemporal forecasting models and can be easily plugged into traditional LSTM architectures with sequence-to-sequence learning and attention mechanisms.
We apply our proposed architecture to two representative, practical use cases that involve point-cloud streams, i.e. mobile service traffic forecasting and air quality indicator forecasting.
Our results, obtained with real-world datasets collected in diverse scenarios for each use case, show that CloudLSTM delivers accurate long-term predictions, outperforming a variety of neural network models.
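The core idea of convolving over a local set of neighbouring points can be illustrated with a small NumPy sketch; the k-nearest-neighbour gathering, the concatenation order and the single shared weight matrix are simplifications and do not reproduce the paper's D-Conv operator (in particular, this sketch ignores permutation invariance and the prediction of point coordinates).

    import numpy as np

    def neighborhood_conv(points, features, weights, k=3):
        # points: (N, d) coordinates; features: (N, f); weights: (k * f, f_out).
        # For every point, gather its k nearest neighbours (including itself)
        # and mix their features with a shared learned weight matrix.
        out = []
        for i in range(points.shape[0]):
            dist = np.linalg.norm(points - points[i], axis=1)
            idx = np.argsort(dist)[:k]
            local = features[idx].reshape(-1)   # concatenated neighbour features
            out.append(local @ weights)
        return np.stack(out)                    # (N, f_out)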
Point-cloud stream forecasting aims at predicting the future values and/or locations of data streams generated by a geospatial point-cloud S, given sequences of historical observations (Shi & Yeung, 2018) .
Example data sources include mobile network antennas that serve the traffic generated by ubiquitous mobile services at city scale (Zhang et al., 2019b) , sensors that monitor the air quality of a target region (Cheng et al., 2018) , or moving crowds that produce individual trajectories.
Unlike traditional spatiotemporal forecasting on grid-structural data, like precipitation nowcasting (Shi et al., 2015) or video frame prediction (Wang et al., 2018) , point-cloud stream forecasting needs to operate on geometrically scattered sets of points, which are irregular and unordered, and encapsulate complex spatial correlations.
While vanilla Long Short-term Memories (LSTMs) have modest abilities to exploit spatial features (Shi et al., 2015) , convolution-based recurrent neural network (RNN) models, such as ConvLSTM (Shi et al., 2015) and PredRNN++ (Wang et al., 2018) , are limited to modeling grid-structural data, and are therefore inappropriate for handling scattered point-clouds.
Figure: Different approaches to geospatial data stream forecasting: predicting over input data streams that are inherently grid-structured, e.g., video frames using ConvLSTMs (top); mapping of point-cloud input to a grid, e.g., mobile network traffic collected at different antennas in a city, to enable forecasting using existing neural network structures (middle); forecasting directly over point-cloud data streams using historical information (as above, but without pre-processing), as proposed in this paper (bottom).
PointCNN learns a transformation that simultaneously weights and permutes the input features (Li et al., 2018) .
Through this, the proposed PointCNN leverages spatial-local correlations of point clouds, irrespective of the order of the input.
Notably, although these architectures can learn spatial features of point-clouds, they are designed to work with static data, thus have limited ability to discover temporal dependencies.
We introduce CloudLSTM, a dedicated neural model for spatiotemporal forecasting tailored to point-cloud data streams.
The CloudLSTM builds upon the DConv operator, which performs convolution over point-clouds to learn spatial features while maintaining permutation invariance.
The DConv simultaneously predicts the values and coordinates of each point, thereby adapting to changing spatial correlations of the data at each time step.
DConv is flexible, as it can be easily combined with various RNN models (i.e., RNN, GRU, and LSTM), Seq2seq learning, and attention mechanisms. | This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:665 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Knowledge Graphs (KG), composed of entities and relations, provide a structured representation of knowledge.
For easy access to statistical approaches on relational data, multiple methods to embed a KG as components of R^d have been introduced.
We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space.
TransINT maps sets of entities (tied by a relation) to continuous sets of vectors that are inclusion-ordered isomorphically to relation implications.
With a novel parameter sharing scheme, TransINT enables automatic training on missing but implied facts without rule grounding.
We achieve new state-of-the-art performances with significant margins in Link Prediction and Triple Classification on the FB122 dataset, with boosted performance even on test instances that cannot be inferred by logical rules.
The angles between the continuous sets embedded by TransINT provide an interpretable way to mine semantic relatedness and implication rules among relations.
Recently, learning distributed vector representations of multi-relational knowledge has become an active area of research (Bordes et al.; Nickel et al.; Kazemi & Poole; Wang et al.; Bordes et al.) .
These methods map components of a KG (entities and relations) to elements of R^d and capture statistical patterns, regarding vectors close in distance as representing similar concepts.
However, they lack common sense knowledge, which is essential for reasoning (Wang et al.; Guo et al.; Nickel & Kiela) .
For example, "parent" and "father" would be deemed similar by KG embeddings, but by common sense, "father ⇒ parent" yet not the other way around.
Thus, one focus of current research is to bring common sense rules to KG embeddings (Guo et al.; Wang et al.; Wei et al.) . Some
methods impose hard geometric constraints and embed asymmetric orderings of knowledge (Nickel & Kiela; Vendrov et al.; Vilnis et al.) . However
, they only embed hierarchy (unary Is_a relations), and cannot embed n-ary relations in KG's. Moreover
, their hierarchy learning is largely incompatible with conventional relational learning, because they put hard constraints on distance to represent partial ordering, which is a common metric of similarity/ relatedness in relational learning.
We propose TransINT, a new KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space.
TransINT restricts entities tied by a relation to be embedded as vectors in a particular region of R^d, included isomorphically to the order of relation implication.
For example, we map any entities tied by is_father_of to vectors in a region that is part of the region for is_parent_of; thus, we can automatically know that if John is a father of Tom, he is also his parent even if such a fact is missing in the KG.
Such embeddings are constructed by sharing and rank-ordering the basis of the linear subspaces where the vectors are required to belong.
Mathematically, a relation can be viewed as sets of entities tied by a constraint (Stoll) .
We take such a view on KG's, since it gives consistency and interpretability to model behavior.
Furthermore, for the first time in KG embedding, we map sets of entities under a relation constraint to a continuous set of points (whose elements are entity vectors), which learns relationships among not only individual entity vectors but also sets of entities.
We show that angles between embedded relation sets can identify semantic patterns and implication rules -an extension of the line of thought as in word/ image embedding methods such as Mikolov et al., Frome et al. to relational embedding.
Such mining is both limited and less interpretable if embedded sets are discrete (Vilnis et al.; Vendrov et al.) or each entity itself is embedded to a region, not a member vector of it (Vilnis et al.) .
Such interpretable meta-learning by TransINT opens up possibilities for explainable reasoning in applications such as recommender systems (Ma et al.) and question answering (Hamilton et al.) .
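A conceptual NumPy illustration of how sharing and rank-ordering a basis yields inclusion-ordered regions follows; it is only meant to convey the geometry and is not TransINT's actual parameterization, training objective or membership test.

    import numpy as np

    def in_subspace(x, basis, tol=1e-6):
        # x lies in the column span of `basis` iff its projection residual vanishes.
        proj = basis @ np.linalg.pinv(basis) @ x
        return bool(np.linalg.norm(x - proj) < tol)

    rng = np.random.default_rng(0)
    shared = np.linalg.qr(rng.normal(size=(5, 5)))[0]   # shared, ordered basis vectors
    parent_basis = shared[:, :3]                        # region for is_parent_of
    father_basis = shared[:, :2]                        # prefix of the same basis: is_father_of
    x = father_basis @ rng.normal(size=2)               # an entity vector in the smaller region
    # Membership in the smaller region implies membership in the larger one,
    # mirroring the implication is_father_of => is_parent_of.
    assert in_subspace(x, father_basis) and in_subspace(x, parent_basis)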
We presented TransINT, a new KG embedding method that embeds sets of entities (tied by relations) to continuous sets in R^d that are inclusion-ordered isomorphically to relation implications.
Our method achieved new state-of-the-art performances with significant margins in Link Prediction and Triple Classification on the FB122 dataset, with boosted performance even on test instances that are not affected by rules.
We further propose an interpretable criterion for mining semantic similarity among sets of entities with TransINT. | We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space in an explainable, robust, and geometrically coherent way. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:666 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Unsupervised domain adaptive object detection aims to learn a robust detector under domain shift, where the training (source) domain is label-rich with bounding box annotations, while the testing (target) domain is label-agnostic and the feature distributions between training and testing domains are dissimilar or even totally different.
In this paper, we propose a gradient detach based Stacked Complementary Losses (SCL) method that uses the detection objective (cross entropy and smooth l1 regression) as the primary objective, and inserts several auxiliary losses at different network stages to utilize information from the complement data (target images) that can be effective in adapting model parameters to both source and target domains.
A gradient detach operation is applied between detection and context sub-networks during training to force networks to learn discriminative representations.
We argue that the conventional training with primary objective mainly leverages the information from the source-domain for maximizing likelihood and ignores the complement data in shallow layers of networks, which leads to an insufficient integration within different domains.
Thus, our proposed method is a more syncretic adaptation learning process.
We conduct comprehensive experiments on seven datasets, the results demonstrate that our method performs favorably better than the state-of-the-art methods by a large margin.
For instance, from Cityscapes to FoggyCityscapes, we achieve 37.9% mAP, outperforming the previous art Strong-Weak by 3.6%.
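Read together with the conclusion's remark about suppressing gradients flowing back to the detection backbone, the detach operation can be sketched in PyTorch as below; the module and loss names are placeholders of this sketch, and SCL actually stacks several such complementary losses at different network stages.

    import torch

    def scl_step(backbone, det_head, ctx_head, det_criterion, aux_criterion,
                 images, det_targets, domain_labels):
        feats = backbone(images)
        det_loss = det_criterion(det_head(feats), det_targets)        # primary objective
        # Gradient detach: the auxiliary/context branch reads the features, but its
        # gradients are cut off and never reach the detection backbone.
        aux_loss = aux_criterion(ctx_head(feats.detach()), domain_labels)
        return det_loss + aux_loss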
In real world scenarios, generic object detection always faces severe challenges from variations in viewpoint, background, object appearance, illumination, occlusion conditions, scene change, etc.
These unavoidable factors make object detection under domain shift a challenging and newly rising research topic in recent years.
Also, domain change is a widely recognized, intractable problem that urgently needs to be addressed in real-world detection tasks such as video surveillance and autonomous driving (see Figure 2 ).
Revisiting Domain-Shift Object Detection.
Common approaches for tackling domain-shift object detection are mainly in two directions:
(i) training a supervised model and then fine-tuning it on the target domain; or
(ii) unsupervised cross-domain representation learning.
The former requires additional instance-level annotations on target data, which is fairly laborious, expensive and time-consuming.
So most approaches focus on the latter one but still have some challenges.
The first challenge is that the representations of source and target domain data should be embedded into a common space for matching the object, such as the hidden feature space (Saito et al., 2019; Chen et al., 2018) , input space (Cai et al., 2019) or both of them (Kim et al., 2019b) .
The second is that a feature alignment/matching operation or mechanism for source/target domains should be further defined, such as subspace alignment (Raj et al., 2015) , H-divergence and adversarial learning (Chen et al., 2018) , MRL (Kim et al., 2019b) , Strong-Weak alignment (Saito et al., 2019) , etc.
In general, our SCL is also a learning-based alignment method across domains with an end-to-end framework.
Figure 1 (panels (a)-(d): PASCAL to Clipart; panels (e)-(h): Cityscapes to FoggyCityscapes; each group shows, in order, Non-adapted, CVPR'18 (Chen et al., 2018), CVPR'19 (Saito et al., 2019), and SCL (Ours)): Visualization of features from PASCAL to Clipart (first row) and from Cityscapes to FoggyCityscapes (second row) by t-SNE (Maaten & Hinton, 2008) .
Red indicates the source examples and blue is the target one.
If source and target features lie at the same position, they are shown in light blue.
All models are re-trained with a unified setting to ensure fair comparisons.
It can be observed that our feature embedding results are consistently much better than previous approaches on either dissimilar domains (PASCAL and Clipart) or similar domains (Cityscapes and FoggyCityscapes).
Our Key Ideas.
The goal of this paper is to introduce a simple design that is specific to convolutional neural network optimization and improves its training on tasks that adapt across discrepant domains.
Unsupervised domain adaptation for recognition has been widely studied in a large body of previous literature (Ganin et al., 2016; Long et al., 2016; Tzeng et al., 2017; Panareda Busto & Gall, 2017; Hoffman et al., 2018; Murez et al., 2018; Zhao et al., 2019; Wu et al., 2019) ; our method more or less draws merits from them, like aligning source and target distributions with adversarial learning (domain-invariant alignment).
However, object detection is a technically different problem from classification, since we would like to focus more on the objects of interest (local regions).
In this paper, we have addressed unsupervised domain adaptive object detection through stacked complementary losses.
One of our key contributions is gradient detach training, enabled by suppressing gradients flowing back to the detection backbone.
In addition, we proposed to use multiple complementary losses for better optimization.
We conduct extensive experiments and ablation studies to verify the effectiveness of each component that we proposed.
Our experimental results outperform the state-of-the-art approaches by a large margin on a variety of benchmarks.
Our future work will focus on exploring the domain-shift detection from scratch, i.e., without the pre-trained models like DSOD (Shen et al., 2017) , to avoid involving bias from the pre-trained dataset. | We introduce a new gradient detach based complementary objective training strategy for domain adaptive object detection. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:667 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Convolutional neural networks (CNN) have become the most successful and popular approach in many vision-related domains.
While CNNs are particularly well-suited for capturing a proper hierarchy of concepts from real-world images, they are limited to domains where data is abundant.
Recent attempts have looked into mitigating this data scarcity problem by casting their original single-task problem into a new multi-task learning (MTL) problem.
The main goal of this inductive transfer mechanism is to leverage domain-specific information from related tasks, in order to improve generalization on the main task.
While recent results in the deep learning (DL) community have shown the promising potential of training task-specific CNNs in a soft parameter sharing framework, integrating the recent DL advances for improving knowledge sharing is still an open problem.
In this paper, we propose the Deep Collaboration Network (DCNet), a novel approach for connecting task-specific CNNs in a MTL framework.
We define connectivity in terms of two distinct non-linear transformation blocks.
One aggregates task-specific features into global features, while the other merges back the global features with each task-specific network.
Based on the observation that task relevance depends on depth, our transformation blocks use skip connections as suggested by residual network approaches, to more easily deactivate unrelated task-dependent features.
To validate our approach, we employed facial landmark detection (FLD) datasets as they are readily amenable to MTL, given the number of tasks they include.
Experimental results show that we can achieve up to 24.31% relative improvement in landmark failure rate over other state-of-the-art MTL approaches.
We finally perform an ablation study showing that our approach effectively allows knowledge sharing, by leveraging domain-specific features at particular depths from tasks that we know are related.
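One way to picture the collaborative block described above is the PyTorch sketch below; the 1x1 convolutions and ReLUs stand in for the paper's non-linear transformation blocks and are assumptions of this illustration rather than the authors' exact design.

    import torch
    import torch.nn as nn

    class CollaborativeBlock(nn.Module):
        def __init__(self, channels, num_tasks):
            super().__init__()
            # Aggregation: all task-specific feature maps -> one global feature map.
            self.aggregate = nn.Conv2d(channels * num_tasks, channels, kernel_size=1)
            # Merge: the global feature map is sent back into each task-specific branch.
            self.merge = nn.ModuleList(
                [nn.Conv2d(2 * channels, channels, kernel_size=1) for _ in range(num_tasks)]
            )

        def forward(self, task_feats):  # list of (B, C, H, W) tensors, one per task
            global_feat = torch.relu(self.aggregate(torch.cat(task_feats, dim=1)))
            # The skip connection lets a branch easily suppress unrelated shared features.
            return [
                f + torch.relu(m(torch.cat([f, global_feat], dim=1)))
                for f, m in zip(task_feats, self.merge)
            ]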
Over the past few years, convolutional neural networks (CNNs) have become the leading approach in many vision-related tasks BID12 .
By creating a hierarchy of increasingly abstract concepts, they can transform complex high-dimensional input images into simple low-dimensional output features.
Although CNNs are particularly well-suited for capturing a proper hierarchy of concepts from real-world images, successfully training them requires large amounts of data.
Optimizing deep networks is tricky, not only because of problems like vanishing / exploding gradients BID8 or internal covariate shift BID9 , but also because they typically have many parameters to be learned (which can go up to 137 billion BID21 ).
While previous works have looked at networks pre-trained on a large image-based dataset as a starting point for their gradient descent optimization, others have considered improving generalization by casting their original single-task problem into a new multi-task learning (MTL) problem (see BID31 for a review).
As BID2 explained in his seminal work: "MTL improves generalization by leveraging the domain-specific information contained in the training signals of related tasks".
Exploring new ways to efficiently gather more information from related tasks -the core contribution of our approach -can thus help a network to further improve upon its main task. The use of MTL goes back several years, but has recently proven its value in several domains.
As a consequence, it has become a dominant field of machine learning BID30 .
Although many early and influential works contributed to this field BID5 ), recent major advances in neural networks opened up opportunities for novel contributions in MTL.
Works on grasping BID17 , pedestrian detection BID24 , natural language processing BID14 , face recognition BID26 BID27 and object detection BID16 have all shown that MTL has been finally adopted by the deep learning (DL) community as a way to mitigate the lack of data, and is thus growing in popularity. MTL strategies can be divided into two major categories: hard and soft parameter sharing.
Hard parameter sharing is the earliest and most common strategy for performing MTL, which dates back to the original work of BID2 .
Approaches in this category generally share the hidden layers between all tasks, while keeping separate outputs.
Recent results in the DL community have shown that a central CNN with separate task-specific fully connected (FC) layers can successfully leverage domain-specific information BID18 BID17 BID27 .
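To make the hard parameter sharing setup concrete, a minimal PyTorch-style sketch could look as follows; the trunk depth, channel sizes and head dimensions are illustrative assumptions and not the configurations used in the cited works.
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    # One shared convolutional trunk (hard parameter sharing) and one small
    # fully connected head per task, so every task reuses the same hidden layers.
    def __init__(self, task_output_dims):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.heads = nn.ModuleList([nn.Linear(64, d) for d in task_output_dims])

    def forward(self, x):
        h = self.trunk(x)                     # shared hidden representation
        return [head(h) for head in self.heads]  # separate task-specific outputs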
Although hard parameter sharing reduces the risk of over-fitting BID1 , shared layers are prone to be overwhelmed by features or contaminated by noise coming from particular noxious related tasks.
Soft parameter sharing has been proposed as an alternative to alleviate this drawback, and has been growing in popularity as a potential successor.
Approaches in this category separate all hidden layers into task-specific models, while providing a knowledge sharing mechanism.
Each model can then learn task-specific features without interfering with others, while still sharing their knowledge.
Recent works using one network per task have looked at regularizing the distance between task-specific parameters with an ℓ2 norm BID4 or a trace norm BID25 , training shared and private LSTM submodules , partitioning the hidden layers into subspaces BID19 and regularizing the FC layers with tensor normal priors BID15 .
In the domain of continual learning, progressive network BID20 has also shown promising results for cross-domain sequential transfer learning, by employing lateral connections to previously learned networks.
Although all these soft parameter approaches have shown promising potential, improving the knowledge sharing mechanism is still an open problem.
In this paper, we thus present the deep collaboration network (DCNet), a novel approach for connecting task-specific networks in a soft parameter sharing MTL framework.
We contribute with a novel knowledge sharing mechanism, dubbed the collaborative block, which implements connectivity in terms of two distinct non-linear transformations.
One aggregates task-specific features into global features, and the other merges back the global features into each task-specific network.
We demonstrate that our collaborative block can be dropped in any existing architecture as a whole, and can easily enable MTL for any approach.
We evaluated our method on the problem of facial landmark detection in an MTL framework and obtained better results in comparison to other approaches of the literature.
We further assess the objectivity of our training framework by randomly varying the contribution of each related task, and finally give insights on how our collaborative block enables knowledge sharing with an ablation study on our DCNet.
The content of our paper is organized as follows.
We first describe in Section 2 works on MTL closely related to our approach.
We also describe facial landmark detection, our targeted application.
Architectural details of our proposed multi-task approach and its motivation are spelled out in Section 3.
We then present in Section 4 a number of comparative results on this facial landmark detection problem for two CNN architectures, AlexNet and ResNet18, that have been adapted with various MTL frameworks including ours.
It also contains discussions on an ablation study showing at which depth feature maps from other tasks are borrowed to improve the main task.
We conclude our paper in Section 5.
2 RELATED WORK
2.1 MULTI-TASK LEARNING
Our proposed deep collaboration network (DCNet) is related to other existing approaches.
The first one is the cross-stitch (CS) network BID16 , which connects task-specific networks through linear combinations of the spatial feature maps at specific layers.
One drawback of CS is that it is limited to capturing linear dependencies only, something we address in our proposed approach by employing non-linearities when sharing feature maps.
Indeed, non-linear combinations are usually able to learn richer relationships, as demonstrated in deep networks.
Another related approach is the tasks-constrained deep convolutional network (TCDCN) for facial landmark detection.
In it, the authors proposed an early-stopping criterion for removing auxiliary tasks before the network starts to over-fit to the detriment of the main task.
One drawback of their approach is that their criterion has several hyper-parameters, which must all be selected manually.
For instance, they define a hyper-parameter controlling the period length of the local window and a threshold that stops the task when the criterion exceeds it, all of which can be specified for each task independently.
Unlike TCDCN, our approach has no hyper-parameters that depend on the tasks at hand, which greatly simplifies the training process.
Our two transformation blocks consist of a series of batch normalization, ReLU, and convolutional layers shaped in a standard setting based on recent advances in residual networks (see Sec. 3).
This is particularly useful for computationally expensive deep networks, since integrating our proposed approach requires no additional hyper-parameter tuning experiments.
Our proposed approach is also related to HyperFace BID18 .
In this work, the authors proposed to fuse the intermediate layers of AlexNet and exploit the hierarchical nature of the features.
Their goal was to allow low-level features containing better localization properties to help tasks such as landmark localization and pose detection, and allow more class-specific high-level features to help tasks like face detection and gender recognition.
Although HyperFace uses a single shared CNN instead of task-specific CNNs and is not entirely related to our approach, the idea of feature fusion is also central in our work.
Instead of fusing the features at intermediate layers of a single CNN, our approach aggregates same-level features of multiple CNNs, at different depths independently.
Also, one drawback of HyperFace is that the proposed feature fusion is specific to AlexNet, while our method is not specific to any network.
In fact, our approach takes into account the vast diversity of existing network architectures, since it can be added to any architecture without modification.
In this paper, we proposed the deep collaboration network (DCNet), a novel approach for connecting task-specific networks in a multi-task learning setting.
It implements feature connectivity and sharing through two distinct non-linear transformations inside a collaborative block, which also incorporates skip connection and residual mapping that are known for their good training behavior.
The first transformation aggregates the task-specific feature maps into a global feature map representing unified knowledge, and the second one merges it back into each task-specific network.
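For illustration, a block in this spirit could be sketched as follows in PyTorch; the 1x1 convolutions, channel counts and exact layer ordering are assumptions added here and not necessarily the configuration chosen by the authors.
import torch
import torch.nn as nn

class CollaborativeBlock(nn.Module):
    # Aggregates same-level feature maps from several task-specific networks into
    # a global map, then merges the global map back into each task network through
    # a skip (residual) connection so unrelated shared features can be ignored.
    def __init__(self, num_tasks, channels):
        super().__init__()
        def bn_relu_conv(c_in, c_out):
            return nn.Sequential(nn.BatchNorm2d(c_in), nn.ReLU(),
                                 nn.Conv2d(c_in, c_out, kernel_size=1))
        self.aggregate = bn_relu_conv(num_tasks * channels, channels)
        self.merge = nn.ModuleList(
            [bn_relu_conv(2 * channels, channels) for _ in range(num_tasks)])

    def forward(self, task_features):  # list of (B, C, H, W) tensors, one per task
        global_map = self.aggregate(torch.cat(task_features, dim=1))
        return [f + m(torch.cat([f, global_map], dim=1))  # residual merge per task
                for f, m in zip(task_features, self.merge)]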
One key characteristic of our collaborative blocks is that they can be dropped in virtually any existing architecture, making them universal adapters to endow deep networks with multi-task learning capabilities.
Our results on the MTFL, AFW and AFLW datasets showed that our DCNet outperformed several state-of-the-art approaches, including cross-stitch networks.
Our additional ablation study, using ResNet18 as underlying network, confirmed our intuition that the task-specific networks exploited the added flexibility provided by our approach.
Additionally, these task-specific networks successfully incorporated features having varying levels of abstraction.
Evaluating our proposed approach on other MTL problems could be an interesting avenue for future works.
For instance, the recurrent networks used to solve natural language processing problems could benefit from incorporating our novel method leveraging domain-information of related tasks. | We propose a novel approach for connecting task-specific networks in a multi-task learning setting based on recent residual network advances. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:668 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Zero-Shot Learning (ZSL) is a classification task where some classes referred as unseen classes have no labeled training images.
Instead, we only have side information (or description) about seen and unseen classes, often in the form of semantic or descriptive attributes.
Lack of training images from a set of classes restricts the use of standard classification techniques and losses, including the popular cross-entropy loss.
The key step in tackling ZSL problem is bridging visual to semantic space via learning a nonlinear embedding.
A well established approach is to obtain the semantic representation of the visual information and perform classification in the semantic space.
In this paper, we propose a novel architecture of casting ZSL as a fully connected neural-network with cross-entropy loss to embed visual space to semantic space.
During training in order to introduce unseen visual information to the network, we utilize soft-labeling based on semantic similarities between seen and unseen classes.
To the best of our knowledge, such similarity based soft-labeling is not explored for cross-modal transfer and ZSL.
We evaluate the proposed model on five benchmark datasets for zero-shot learning, AwA1, AwA2, aPY, SUN and CUB datasets, and show that, despite the simplicity, our approach achieves the state-of-the-art performance in Generalized-ZSL setting on all of these datasets and outperforms the state-of-the-art for some datasets.
Supervised classifiers, specifically Deep Neural Networks, need a large number of labeled samples to perform well.
Deep learning frameworks are known to have limitations in the fine-grained classification regime and in detecting object categories with no labeled data (Socher et al., 2013; Zhang & Koniusz, 2018) .
On the contrary, humans can recognize new classes using their previous knowledge.
This power is due to the ability of humans to transfer their prior knowledge to recognize new objects (Fu & Sigal, 2016; Lake et al., 2015) .
Zero-shot learning aims to achieve this human-like capability for learning algorithms, which naturally reduces the burden of labeling.
In zero-shot learning problem, there are no training samples available for a set of classes, referred to as unseen classes.
Instead, semantic information (in the form of visual attributes or textual features) is available for unseen classes (Lampert et al., 2009; 2014) .
Besides, we have standard supervised training data for a different set of classes, referred to as seen classes along with the semantic information of seen classes.
The key to solving zero-shot learning problem is to leverage trained classifier on seen classes to predict unseen classes by transferring knowledge analogous to humans.
Early variants of ZSL assume that during inference, samples are only from unseen classes.
Recent observations Scheirer et al., 2013; realize that such an assumption is not realistic.
Generalized ZSL (GZSL) addresses this concern and considers a more practical variant.
In GZSL there is no restriction on seen and unseen classes during inference.
We are required to discriminate between all the classes.
Clearly, GZSL is more challenging because the trained classifier is generally biased toward seen classes.
In order to create a bridge between visual space and semantic attribute space, some methods utilize embedding techniques (Palatucci et al., 2009; Romera-Paredes & Torr, 2015; Socher et al., 2013; Bucher et al., 2016; Xu et al., 2017; Zhang et al., 2017; Simonyan & Zisserman, 2014; Xian et al., 2016; Zhang & Saligrama, 2016; Al-Halah et al., 2016; Zhang & Shi, 2019; Atzmon & Chechik, 2019) and the others use semantic similarity between seen and unseen classes (Zhang & Saligrama, 2015; Mensink et al., 2014) .
Semantic similarity based models represent each unseen class as a mixture of seen classes.
The embedding based models, in contrast, follow three different directions: mapping visual space to semantic space (Palatucci et al., 2009; Romera-Paredes & Torr, 2015; Socher et al., 2013; Bucher et al., 2016; Xu et al., 2017) , mapping semantic space to the visual space (Zhang et al., 2017; Shojaee & Baghshah, 2016; Ye & Guo, 2017) , and finding a latent space and then mapping both visual and semantic space into the joint embedding space (Simonyan & Zisserman, 2014; Xian et al., 2016; Zhang & Saligrama, 2016; Al-Halah et al., 2016) .
The loss functions in embedding based models have training samples only from the seen classes.
For unseen classes, we do not have any samples.
It is not difficult to see that this lack of training samples biases the learning process towards seen classes only.
One of the recently proposed techniques to address this issue is augmenting the loss function with some unsupervised regularization such as entropy minimization over the unseen classes .
Another recent methodology which follows a different perspective is deploying Generative Adversarial Network (GAN) to generate synthetic samples for unseen classes by utilizing their attribute information Zhu et al., 2018; Xian et al., 2018) .
Although generative models boost the results significantly, it is difficult to train these models.
Furthermore, the training requires generating a large number of samples followed by training on a much larger augmented dataset, which hurts their scalability.
The two most recent state-of-the-art GZSL methods, CRnet (Zhang & Shi, 2019) and COSMO (Atzmon & Chechik, 2019) , both employ a complex mixture of experts approach.
CRnet is based on k-means clustering with an expert module on each cluster (seen class) to map semantic space to visual space.
The output of experts (cooperation modules) are integrated and finally sent to a complex loss (relation module) to make a decision.
CRnet is a multi-module (multi-network) method that needs end-to-end training with many hyperparameters.
Also COSMO is a complex gating model with three modules: a seen/unseen classifier and two expert classifiers over seen and unseen classes.
Both of these methods have many modules, and hence, several hyperparameters; architectural, and learning decisions.
A complex pipeline is susceptible to errors, for example, CRnet uses k-means clustering for training and determining the number of experts and a weak clustering will lead to bad results.
Our Contribution: We propose a simple fully connected neural network architecture with unified (both seen and unseen classes together) cross-entropy loss along with soft-labeling.
Soft-labeling is the key novelty of our approach which enables the training data from the seen classes to also train the unseen class.
We directly use attribute similarity information between the correct seen class and the unseen classes to create a soft unseen label for each training sample.
As a result of soft labeling, training instances for seen classes also serve as soft training instances for the unseen classes without increasing the training corpus.
This soft labeling leads to implicit supervision for the unseen classes that eliminates the need for any unsupervised regularization such as entropy loss in .
Soft-labeling along with cross-entropy loss enables a simple MLP network to tackle the GZSL problem.
Our proposed model, which we call Soft-labeled ZSL (SZSL), is a simple (unlike GANs) and efficient (unlike visual-semantic pairwise embedding models) approach which achieves state-of-the-art performance in the Generalized-ZSL setting on all five ZSL benchmark datasets and outperforms the state-of-the-art for some of them.
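A minimal sketch of the soft-labeling idea is given below; the cosine similarity, temperature and the amount of probability mass moved to unseen classes are illustrative assumptions rather than the exact choices made in the paper.
import numpy as np

def soft_label(seen_idx, seen_attrs, unseen_attrs, temperature=1.0, unseen_mass=0.3):
    # seen_idx: index of the true seen class; *_attrs: (num_classes, d) attribute matrices.
    a = seen_attrs[seen_idx]
    sim = unseen_attrs @ a / (np.linalg.norm(unseen_attrs, axis=1) * np.linalg.norm(a) + 1e-8)
    w = np.exp(sim / temperature)
    w = unseen_mass * w / w.sum()                # similarity-weighted mass on unseen classes
    label = np.zeros(len(seen_attrs) + len(unseen_attrs))
    label[seen_idx] = 1.0 - unseen_mass          # most of the mass stays on the true seen class
    label[len(seen_attrs):] = w
    return label                                 # target for a unified cross-entropy loss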
We proposed a discriminative GZSL classifier with visual-to-semantic mapping and cross-entropy loss.
During training, while SZSL is trained on a seen class, it simultaneously learns similar unseen classes through soft labels based on semantic class attributes.
We deploy similarity based soft labeling on unseen classes that allows us to learn both seen and unseen signatures simultaneously via a simple architecture.
Our proposed soft-labeling strategy along with cross-entropy loss leads to a novel regularization via generalized similarity-based weighted cross-entropy loss that can successfully tackle GZSL problem.
Soft-labeling offers a trade-off between seen and unseen accuracies and provides the capability to adjust these accuracies based on the particular application.
We achieve state-of-the-art performance, in GZSL setting, on all five ZSL benchmark datasets while keeping the model simple, efficient and easy to train. | How to use cross-entropy loss for zero shot learning with soft labeling on unseen classes : a simple and effective solution that achieves state-of-the-art performance on five ZSL benchmark datasets. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:669 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep Neural Networks (DNNs) are known for excellent performance in supervised tasks such as classification.
Convolutional Neural Networks (CNNs), in particular, can learn effective features and build high-level representations that can be used not only for classification, but also for querying and nearest neighbor search.
However, CNNs have also been shown to suffer from a performance drop when the distribution of the data changes from training to test data.
In this paper we analyze the internal representations of CNNs and observe that the representations of unseen data in each class spread more (with higher variance) in the embedding space of the CNN compared to representations of the training data.
More importantly, this difference is more extreme if the unseen data comes from a shifted distribution.
Based on this observation, we objectively evaluate the degree of representation’s variance in each class by applying eigenvalue decomposition on the within-class covariance of the internal representations of CNNs and observe the same behaviour.
This can be problematic as larger variances might lead to mis-classification if the sample crosses the decision boundary of its class.
We apply nearest neighbor classification on the representations and empirically show that the embeddings with the high variance actually have significantly worse KNN classification performances, although this could not be foreseen from their end-to-end classification results.
To tackle this problem, we propose Deep Within-Class Covariance Analysis (DWCCA), a deep neural network layer that significantly reduces the within-class covariance of a DNN’s representation, improving performance on unseen test data from a shifted distribution.
We empirically evaluate DWCCA on two datasets for Acoustic Scene Classification (DCASE2016 and DCASE2017).
We demonstrate that not only does DWCCA significantly improve the network's internal representation, it also increases the end-to-end classification accuracy, especially when the test set exhibits a slight distribution shift.
By adding DWCCA to a VGG neural network, we achieve around 6 percentage points improvement in the case of a distribution mismatch.
Convolutional Neural Networks (CNNs) are the state of the art in many supervised learning tasks such as classification, and using the power of convolutional layers, CNNs can learn useful features that are often superior to engineered features, and build internal representations that can achieve high classification performance.It has been shown that CNNs have a surprising ability to fit data, so much so that they can even perfectly learn from data with random labels BID32 .
But of course, memorising the training data is not sufficient: a model is expected to generalize to unseen data points.
Additionally, a robust model has to be able to not only deal with unseen data points that are similar to the training set, but also cope with unseen data points that may come from a slightly different distribution than the training data (distribution mismatch).
When there is a distribution shift between the training and test sets, robustness of the model's representation becomes more important as it has to classify or embed data points that are quite different from the ones it has observed in the training set.In this paper, we investigate this by using a well-known DNN architecture (VGG BID28 ) that is adapted for audio classification BID9 and is widely used among researchers.
We evaluate VGG on data with as well as without distribution mismatch and observe that while VGG exhibits a reasonable performance on the data without distribution mismatch, its performance significantly drops when tested on data from a shifted distribution.
We start by analyzing the internal representations of the network by using visualisations.
As will be seen in the first (a-c) and the 3rd rows (g-i) of FIG2 , the network's internal representations in each class spread more in the embedding space for the unseen data (validation or test) compared to the training data.
This is even more extreme when the unseen data comes from a shifted distribution (i).
For an objective evaluation of the amount of the representation's variance in each class, we compute the within-class covariance of the representations of the network for each class, and we apply eigenvalue decomposition to compute the eigenvalues of each class's covariance matrix.
We then report the sorted eigenvalues of the within-class covariance of the representations in Figure 3 .
As the blue curves show, the eigenvalues in unseen data of validation (b and e) and test (c and d) have considerably higher ranges than train data (a and d) for all the datasets we used.
To better understand the effect of such high variance in the quality of generalisation in the representations of our network, we carry out K-nearest neighbor (KNN) experiments on the dataset without, and the dataset with distribution shift.
As the results in Figure 4 show, the performance degradation from validation (c) compared to test (d) in case of distribution mismatch is significantly higher compared to the performance drop from validation (a) to test (b) when the test data comes from a similar distribution.
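The analysis described above amounts to a few lines of NumPy; the following is only a sketch of the computation, not the authors' code.
import numpy as np

def within_class_eigenvalues(embeddings, labels):
    # embeddings: (N, D) internal representations; labels: (N,) class ids.
    spectra = {}
    for c in np.unique(labels):
        X = embeddings[labels == c]
        cov = np.cov(X, rowvar=False)                         # within-class covariance of class c
        spectra[c] = np.sort(np.linalg.eigvalsh(cov))[::-1]   # sorted eigenvalues, largest first
    return spectra
# A much larger eigenvalue range on unseen data than on training data signals
# representations that spread widely within each class.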
This observation is also aligned with what we observed in the visualisations from FIG2 that showed the data is more spread than validation data, when coming from a shifted distribution.To tackle this problem, we propose Deep Within-Class Covariance Analysis (DWCCA), a deep neural network layer that reformulates the conventional Within-Class Covariance Normalization (WCCN) BID12 as a DNN-compatible version.
DWCCA is trained end-to-end using back-propagation, can be placed in any arbitrary position in a DNN, and is capable of significantly reducing the within-class covariance of the internal representation in a DNN.
We empirically show that DWCCA significantly reduces the within-class covariance of the DNN's representations, in both cases.
Further, we evaluate the generalization quality of the DNN's representations after applying DWCCA by performing nearest neighbor classification on its representations.
Our results show that DWCCA significantly improves the nearest neighbor classification results in both cases, hence improving the generalization quality of the representations.
And finally we report the end-to-end classification results of the trained models on an acoustic scene classification task, using data from the annual IEEE Challenges on Detection and Classification of Acoustic Scenes and Events (DCASE).
It turns out that the classification results for the dataset with distribution shift are significantly improved by integrating the DWCCA layer, while the performance on the dataset without distribution mismatch stayed the same.
In FIG2 , the network's internal representations in each class are projected into 2D via PCA and each class is represented by a different colour.
Looking at first (a-c) and second (d-f) row, it can be seen that for the dataset without mismatched distribution the embeddings of unseen data (validation and test) are spread less after applying DWCCA.
Also comparing the unseen embeddings to the training embeddings (with lower opacity and in grey) it can be seen that the unseen embeddings projected closer to the training embeddings after applying DWCCA.
Comparing third (g-i) and fourth (j-l) row, it can be seen that for the case of a distribution shift DWCCA also reduces the variance of the embeddings in each class, resulting in them being embedded closer to the training embeddings (grey).
This suggests that this property can improve the generalisation of the representations.
We will empirically evaluate this hypothesis later in this section by applying KNN classification on the representations.
Looking at Figure 3 , we can see that in all plots from dataset with, and dataset without distribution shift, DWCCA significantly reduces the within-class variability.
This can be observed by looking at the eigenvalues of the covariances of the representations.
An interesting observation is the range of eigenvalues in vanilla: In both datasets, eigenvalues have significantly larger range on unseen data (validation and test) compared to the training data.
The maximum eigenvalue in DCASE2016 is around 0.7, while the maximum eigenvalue for unseen is around 7, about 10 times more.
Also the maximum eigenvalue of the train set of DCASE2017 is around 2, while the maximum eigenvalue on unseen data is around 20 (10 times larger).
By looking at the KNN results in Fig. 4, it can be seen that in both cases (mismatch / no mismatch), the KNN classification accuracy increases by adding DWCCA.
Also, while the KNN performance is in a reasonable range on the validation set of both datasets, the test accuracy in the mismatch case (DCASE2017) drops significantly compared to the validation set.
Additionally it can be seen that applying DWCCA significantly improves the performance on the test set with shifted distribution, adding an improvement of about 6 percentage points, while the improvement on the test set without mismatch is around 2 percentage points.
Looking at the results of end-to-end classifications in TAB2 , we see that the performance of vanilla on DCASE 2017 consistently and significantly improves when adding DWCCA, on all development folds as well as on the unseen test data.
We observe around 6 percentage points improvement by adding DWCCA to VGG.
Looking at the results of the dataset without mismatch, we see that although the results on all folds were improved by adding DWCCA, the results on the unseen test set do not significantly change.
This can be explained better by looking at FIG2 : the embeddings of validation (b) and test (c) indicate that the test data is projected closer to the training set than the validation set.
This observation suggests that the unseen test in DCASE2016 might be similar (even more similar than the validation data) to the training set.
This can also be confirmed by looking at the results of the best CNN baseline, as well as vanilla: the performances on the unseen test set are consistently higher than all the validation folds.
Hence, DWCCA could not help as there was not a large generalisation gap between training and test.
It is worth mentioning that both vanilla and DWCCA are single models, trained on mono single-channel spectrograms, and no ensemble or multi-channel features were used in these experiments.
In other words, a single VGG model achieves comparable performances to an ensemble of multi-channel ResNets.
We also provide class-wise f-measures on the unseen test set for both datasets in TAB3 .
While on the dataset without distribution shift the average f1 stays the same by adding DWCCA in both calibrated and non-calibrated models, we can observe that there is a boost of 13 percentage points on the "train" class which was the class with the lowest f1 (both calibrated and non-calibrated).
It seems that DWCCA does not have a significant impact on classes with high f1: "office" and "beach", which stay the highest correctly predicted classes and do not face significant changes by DWCCA.
On the dataset with distribution shift, we can see a significant improvement of 4 and 7 percentage points on average f1 for non-calibrated and calibrated models, respectively.
The worst class in DCASE2017 was "beach" with 32%, which was boosted by 24 and 37 percentage points for non-calibrated and calibrated models, respectively.
On the other hand, the best performing class, "forest path", drops by only 2 and 3 percentage points for non-calibrated and calibrated models, respectively.
From the experimental results, we may thus conclude that overall, reducing the within-class covariance of representations using DWCCA results in more robust performance and, in case of a large gap between training and test, DWCCA can improve the generalisation.
Additionally, the networks tend to reach a more uniform performance across various classes by improving the performance on the worst classes while not significantly degrading the best performing classes.
Using DWCCA, we improved the performance of the VGG network by around 6 percentage point when the test datapoints were from a shifted distribution.
We analysed the embedding's generalisation by reporting KNN classification accuracies and showed that DWCCA also improves the generalisation of DNN representations both for with and without distribution mismatch.
We also showed that large within-class covariance of representations can be a sign for bad generalisation and showed that DWCCA can significantly reduce WCC and improve generalisation of the representations. | We propose a novel deep neural network layer for normalising within-class covariance of an internal representation in a neural network that results in significantly improving the generalisation of the learned representations. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:67 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress.
In this work, we use a curriculum of progressively growing action spaces to accelerate learning.
We assume the environment is out of our control, but that the agent may set an internal curriculum by initially restricting its action space.
Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data, value estimates, and state representations from restricted action spaces to the full task.
We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large, multi-agent action spaces.
The value of curricula has been well established in machine learning, reinforcement learning, and in biological systems.
When a desired behaviour is sufficiently complex, or the environment too unforgiving, it can be intractable to learn the behaviour from scratch through random exploration.
Instead, by "starting small" (Elman, 1993) , an agent can build skills, representations, and a dataset of meaningful experiences that allow it to accelerate its learning.
Such curricula can drastically improve sample efficiency (Bengio et al., 2009 ).
Typically, curriculum learning uses a progression of tasks or environments.
Simple tasks that provide meaningful feedback to random agents are used first, and some schedule is used to introduce more challenging tasks later during training (Graves et al., 2017) .
However, in many contexts neither the agent nor experimenter has such unimpeded control over the environment.
In this work, we instead make use of curricula that are internal to the agent, simplifying the exploration problem without changing the environment.
In particular, we grow the size of the action space of reinforcement learning agents over the course of training.
At the beginning of training, our agents use a severely restricted action space.
This helps exploration by guiding the agent towards rewards and meaningful experiences, and provides low variance updates during learning.
The action space is then grown progressively.
Eventually, using the most unrestricted action space, the agents are able to find superior policies.
Each action space is a strict superset of the more restricted ones.
This paradigm requires some domain knowledge to identify a suitable hierarchy of action spaces.
However, such a hierarchy is often easy to find.
Continuous action spaces can be discretised with increasing resolution.
Similarly, curricula for coping with the large combinatorial action spaces induced by many agents can be obtained from the prior that nearby agents are more likely to need to coordinate.
For example, in routing or traffic flow problems nearby agents or nodes may wish to adopt similar local policies to alleviate global congestion.
Our method will be valuable when it is possible to identify a restricted action space in which random exploration leads to significantly more meaningful experiences than random exploration in the full action space.
We propose an approach that uses off-policy reinforcement learning to improve sample efficiency in this type of curriculum learning.
Since data from exploration using a restricted action space is still valid in the Markov Decision Processes (MDPs) corresponding to the less restricted action spaces, we can learn value functions in the less restricted action space with 'off-action-space' data collected by exploring in the restricted action space.
In our approach, we learn value functions corresponding to each level of restriction simultaneously.
We can use the relationships of these value functions to each other to accelerate learning further, by using value estimates themselves as initialisations or as bootstrap targets for the less restricted action spaces, as well as sharing learned state representations.
Empirically, we first demonstrate the efficacy of our approach in two simple control tasks, in which the resolution of discretised actions is progressively increased.
We then tackle a more challenging set of problems with combinatorial action spaces, in the context of StarCraft micromanagement with large numbers of agents (50-100).
Given the heuristic prior that nearby agents in a multiagent setting are likely to need to coordinate, we use hierarchical clustering to impose a restricted action space on the agents.
Agents in a cluster are restricted to take the same action, but we progressively increase the number of groups that can act independently of one another over the course of training.
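As an illustration of this restriction, the joint action at a given curriculum level can be obtained by assigning every agent the action chosen for its cluster; the clustering routine and function names below are assumptions, not the authors' implementation.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def restricted_joint_action(agent_positions, group_actions, num_groups):
    # Hierarchically cluster nearby agents and give every agent in a cluster the
    # same action; increasing num_groups during training grows the action space.
    Z = linkage(agent_positions, method="ward")
    groups = fcluster(Z, t=num_groups, criterion="maxclust") - 1  # agent -> group id
    return np.asarray(group_actions)[groups]                      # per-agent actions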
Our method substantially improves sample efficiency on a number of tasks, outperforming learning any particular action space from scratch, a number of ablations, and an actor-critic baseline that learns a single value function for the behaviour policy, as in the work of Czarnecki et al. (2018) .
Code is available, but redacted here for anonymity.
We also compare against a Mix&Match (MM) baseline using the actor-critic approach of Czarnecki et al. (2018) , but adapted for our new multi-agent setting and supporting a third level in the mixture of policies (A0, A1, A2).
Figure 3 : StarCraft micromanagement with growing action spaces. We report the mean and standard error (over 5 random seeds) of the evaluation winrate during training, with a moving average over the past 500 episodes.
We tuned hyperparameters for all algorithms on the easiest, fastest-training scenario (80 marines vs. 80 marines).
On this scenario, MM learns faster but plateaus at the same level as GAS(2).
MM underperforms on all other scenarios to varying degrees.
Learning separate value functions for each level of restriction, as in our approach, appears to accelerate the transfer learning in the majority of settings.
Another possible explanation is that MM may be more sensitive to hyperparameters.
We do not use population based training to tune hyperparameters on the fly, which could otherwise help MM adapt to each scenario.
However, GAS would presumably also benefit from population based training, at the cost of further computation and sample efficiency.
The policies learned by GAS exhibit good tactics.
Control of separate groups is used to position our army so as to maximise the number of attacking units by forming a wall or a concave that surrounds the enemy, and by coordinating a simultaneous assault.
Figure 5 in the Appendix shows some example learned policies.
In scenarios where MM fails to learn well, it typically falls into a local minimum of attacking head-on.
In each scenario, we test an ablation, GAS(2): ON-AC, that does not use our off-action-space update, instead training each level of the Q-function only with data sampled at that level.
This ablation performs somewhat worse on average, although the size of the impact varies in different scenarios.
In some tasks, it is beneficial to accelerate learning for finer action spaces using data drawn from the off-action-space policy.
In Appendix A.1.1, the same ablation shows significantly worse performance on the Mountain Car task and comparable performance on Acrobot.
We present a number of further ablations on two scenarios.
The most striking failure is of the 'SEP-Q' variant which does not compose the value function as a sum of scores in the hierarchy.
It is critical to ensure that values are well-initialised as we move to less restricted action spaces.
In the discretised continuous control tasks, 'SEP-Q' also underperforms, although less dramatically.
The choice of target is less important: performing a max over coarser action spaces to construct the target as described in Section 4.2 does not improve learning speed as intended.
One potential reason is that maximising over more potential targets increases the maximisation bias already present in Q-learning (Hasselt, 2010).
Additionally, we use an n-step objective which combines a partial onpolicy return with the bootstrap target, which could reduce the relative impact of the choice of target.
Finally, we experiment with a higher number of levels in the action-space hierarchy.
Unfortunately, asymptotic performance is degraded slightly once we use A3 or higher.
One potential reason is that it decreases the average group size, pushing against the limits of the spatial resolution that may be captured by our CNN architecture.
A higher number of levels also increases the amount of time that there are fewer units than groups, leaving certain groups empty and rendering our masked pooling operation degenerate.
We do not see a fundamental limitation that should restrict the further growth of the action space, although we note that most hierarchical approaches in the literature avoid too many levels of depth.
For example, Czarnecki et al. (2018) only mix between two sizes of action spaces rather than the three we progress through in the majority of our GAS experiments.
In this work, we presented an algorithm for growing action spaces with off-policy reinforcement learning to efficiently shape exploration.
We learn value functions for all levels of a hierarchy of restricted action spaces simultaneously, and transfer data, value estimates, and representations from more restricted to less restricted action spaces.
We also present a strategy for using this approach in cooperative multi-agent control.
In discretised continuous control tasks and challenging multiagent StarCraft micromanagement scenarios, we demonstrate empirically the effectiveness of our approach and the value of off-action-space learning.
An interesting avenue for future work is to automatically identify how to restrict action spaces for efficient exploration, potentially through meta-optimisation.
We also look to explore more complex and deeper hierarchies of action spaces. | Progressively growing the available action space is a great curriculum for learning agents | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:670 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recently, researchers have discovered that the state-of-the-art object classifiers can be fooled easily by small perturbations in the input unnoticeable to human eyes.
It is known that an attacker can generate strong adversarial examples if she knows the classifier parameters.
Conversely, a defender can robustify the classifier by retraining if she has the adversarial examples.
The cat-and-mouse game nature of attacks and defenses raises the question of the presence of equilibria in the dynamics.
In this paper, we present a neural-network based attack class to approximate a larger but intractable class of attacks, and formulate the attacker-defender interaction as a zero-sum leader-follower game.
We present sensitivity-penalized optimization algorithms to find minimax solutions, which are the best worst-case defenses against whitebox attacks.
Advantages of the learning-based attacks and defenses compared to gradient-based attacks and defenses are demonstrated with MNIST and CIFAR-10.
Recently, researchers have made an unsettling discovery that the state-of-the-art object classifiers can be fooled easily by small perturbations in the input unnoticeable to human eyes BID24 BID7 .
Following studies tried to explain the cause of the seeming failure of deep learning toward such adversarial examples.
The vulnerability was ascribed to linearity BID24 , low flexibility BID4 , or the flatness/curvedness of decision boundaries BID20 , but a more complete picture is still under research.
This is troublesome since such a vulnerability can be exploited in critical situations such as an autonomous car misreading traffic signs or a facial recognition system granting access to an impersonator without being noticed.
Several methods of generating adversarial examples were proposed BID7 BID19 BID2 , most of which use the knowledge of the classifier to craft examples.
In response, a few defense methods were proposed: retraining target classifiers with adversarial examples called adversarial training BID24 BID7 ; suppressing gradient by retraining with soft labels called defensive distillation BID22 ; hardening target classifiers by training with an ensemble of adversarial examples BID25 .
In this paper we focus on whitebox attacks, that is, the model and the parameters of the classifier are known to the attacker.
This requires a more robust classifier or defense method than simply relying on the secrecy of the parameters as defense.
When the classifier parameters are known to an attacker, existing attack methods are very successful at fooling the classifiers.
Conversely, when the attack is known to the classifier, e.g., in the form of adversarial examples, one can weaken the attack by retraining the classifier with adversarial examples, called adversarial training.
However, if we repeat adversarial sample generation and adversarial training back-to-back, it is observed that the current adversarially-trained classifier is no longer robust to previous attacks (see Sec. 3.1).
To find the classifier robust against the class of gradient-based attacks, we first propose a sensitivity-penalized optimization procedure.
Experiments show that the classifier from the procedure is more robust than adversarially-trained classifiers against previous attacks, but it still remains vulnerable to some degree.
This raises the main question of the paper: Can a classifier be robust to all types of attacks?
The answer seems to be negative in light of the strong adversarial examples that can be crafted by direct optimization procedures from or BID2 .
Note that the class of optimization-based attacks is very large, as there is no restriction on the adversarial patterns that can be generated except for certain bounds such as lp-norm bounds.
The vastness of the optimization-based attack class is a hindrance to the study of the problem, as the defender cannot learn efficiently about the attack class from a finite number of samples.
To study the problem analytically, we use a class of learning-based attacks that can be generated by a class of neural networks.
This class of attacks can be considered an approximation of the class of optimization-based attacks, in that the search space of optimal perturbation is restricted to the parameter space of a neural network architecture, e.g., all perturbations that can be generated by fully-connected 3-layer ReLU networks.
Similar to what we propose, others have recently considered training neural networks to generate adversarial examples BID21 BID0 .
While the proposed learning-based attack is weaker than the optimization-based attack, it can generate adversarial examples at test time with only single feedforward passes, which makes real-time attacks possible.
We also show that the class of neural-network based attacks is quite different from the class of gradient-based attacks (see Sec. 4.1).
Using the learning-based attack class, we introduce a continuous game formulation for analyzing the dynamics of attack-defense.
The game is played by an attacker and a defender/classifier, where the attacker tries to maximize the risk of the classification task by perturbing input samples under certain constraints such as lp-norm bounds, and the defender/classifier tries to adjust its parameters to minimize the same risk given the perturbed inputs.
It is important to note that for adversarial attack problems, the performance of an attack or a defense cannot be measured in isolation, but only in pairs of (attack, defense).
This is because the effectiveness of an attack/defense depends on the defense/attack it is against.
As a two-player game, there may not be a dominant defense that is no less robust than all other defenses against all attacks.
However, there is a natural notion of the best defense or attack in the worst case.
Suppose one player moves first by choosing her parameters and the other player responds with the knowledge of the first player's move.
This is an example of a leader-follower game BID1 for which there are two well-known states, the minimax and the maximin solutions if it is a constant-sum game.
To find those solutions empirically, we propose a new continuous optimization method using the sensitivity penalization term.
We show that the minimax solution from the proposed method is indeed different from the solution from the conventional alternating descent/ascent and is also more robust.
We also show that the strength/weakness of the minimax-trained classifier is different from that of adversarially-trained classifiers for gradient-based attacks.
The contributions of this paper are summarized as follows.
• We provide a continuous game model to analyze adversarial example attacks and defenses, using the neural network-based attack class as a feasible approximation to a larger but intractable class of optimization-based attacks.
• We demonstrate the difficulty of defending against multiple attack types and present the minimax defense as the best worst-case defense method.
• We propose a sensitivity-penalized optimization method (Alg. 1) to numerically find continuous minimax solutions, which is better than alternating descent/ascent.
The proposed optimization method can also be used for other minimax problems beyond the adversarial example problem.
The proposed methods are demonstrated with the MNIST and the CIFAR-10 datasets.
For readability, details about experimental settings and the results with CIFAR-10 are presented in the appendix.
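To make the attacker-defender dynamics concrete, the following is a heavily simplified sketch of alternating updates with a sensitivity-style penalty; the actual Alg. 1, its penalty term and its hyperparameters may differ from what is assumed here.
import torch

def minimax_step(classifier, attacker, x, y, loss_fn, opt_def, opt_atk,
                 eps=0.1, penalty=1.0):
    # Attacker network: maximize the classification risk with a bounded perturbation.
    delta = eps * torch.tanh(attacker(x))
    atk_loss = -loss_fn(classifier(x + delta), y)
    opt_atk.zero_grad(); atk_loss.backward(); opt_atk.step()

    # Defender: minimize the adversarial risk plus a penalty on how sensitive the
    # loss is to the attack, approximating the best worst-case (minimax) defense.
    delta = eps * torch.tanh(attacker(x)).detach()
    clean = loss_fn(classifier(x), y)
    adv = loss_fn(classifier(x + delta), y)
    def_loss = adv + penalty * (adv - clean).abs()
    opt_def.zero_grad(); def_loss.backward(); opt_def.step()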
In this paper, we present a continuous game formulation of adversarial attacks and defenses using a learning-based attack class implemented by neural networks.
We show that this class of attacks is quite different from the gradient-based attacks.
While a classifier robust to all types of attack may yet be an elusive goal, the minimax defense against the neural network-based attack class is well-defined and practically achievable.
We show that the proposed optimization method can find minimax defenses which are more robust than adversarially-trained classifiers and the classifiers from simple alternating descent/ascent.
We demonstrate these with MNIST and CIFAR-10. | A game-theoretic solution to adversarial attacks and defenses. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:671 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Supervised learning with irregularly sampled time series has been a challenge to Machine Learning methods due to the obstacle of dealing with irregular time intervals.
Some papers have recently introduced recurrent neural network models that deal with irregularity, but most of them rely on complex mechanisms to achieve a better performance.
This work proposes a novel method to represent timestamps (hours or dates) as dense vectors using sinusoidal functions, called Time Embeddings.
As a data input method it can be applied to most machine learning models.
The method was evaluated with two predictive tasks from MIMIC III, a dataset of irregularly sampled time series of electronic health records.
Our tests showed an improvement to LSTM-based and classical machine learning models, especially with very irregular data.
An irregularly (or unevenly) sampled time series is a sequence of samples with irregular time intervals between observations.
This class of data adds a time sparsity factor when the intervals between observations are large.
Most machine learning methods do not have time comprehension, which means they only consider observation order.
This makes it harder to learn time dependencies found in time series problems.
To solve this problem, recent works propose models that are able to deal with such irregularity (Lipton et al., 2016; Bahadori & Lipton, 2019; Che et al., 2018; Shukla & Marlin, 2018) , but they often rely on complex mechanisms to represent irregularity or to impute missing data.
In this paper, we introduce a novel way to represent time as a dense vector representation, which is able to improve the expressiveness of irregularly sampled data; we call it Time Embeddings (TEs).
The proposed method is based on sinusoidal functions discretized to create a continuous representation of time.
TEs can make a model capable of estimating time intervals between observations, and they do so without the addition of any trainable parameters.
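A minimal sketch of such a sinusoidal time representation is given below; the dimensionality and frequency spacing are illustrative choices, not necessarily those of the paper.
import numpy as np

def time_embedding(t_hours, dim=16, base_period=24.0):
    # Map a timestamp (e.g. hours since admission) to sines and cosines at
    # geometrically spaced periods, so time gaps between observations become
    # recoverable by the model without any trainable parameters.
    periods = base_period * (1000.0 ** (np.arange(dim // 2) / (dim // 2)))
    angles = 2.0 * np.pi * np.asarray(t_hours, dtype=float)[..., None] / periods
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)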
We evaluate the method with a publicly available real-world dataset of irregularly sampled electronic health records called MIMIC-III (Johnson et al., 2016) .
The tests were made with two tasks: a classification task (in-hospital mortality prediction) and a regression (length of stay).
To evaluate the impact of time representation on data expressiveness, we used LSTM and Self-Attentive LSTM models.
Both are common RNN models that have been reported to achieve great performance in several time series classification problems, and specifically with the MIMIC-III dataset (Lipton et al., 2016; Shukla & Marlin, 2018; Bahadori & Lipton, 2019; Zhang et al., 2018) .
We also evaluated simpler models such as linear and logistic regression and a shallow Multi Layer Perceptron.
All models were evaluated with and without TEs to asses possible improvements.
This paper proposes a novel method to represent hour times or dates as dense vectors to improve irregularly sampled time series.
It was evaluated with two different approaches on two tasks from the MIMIC III dataset.
Our method showed some improvement with most models tested, including recurrent neural networks and classic machine learning methods.
Despite being outperformed by binary masking in some tests, we believe TEs can still be a viable option, especially for very irregular time series and high-dimensional data, where TEs can be applied by addition without increasing the input dimensionality.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:672 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Community detection in graphs can be solved via spectral methods or posterior inference under certain probabilistic graphical models.
Focusing on random graph families such as the stochastic block model, recent research has unified both approaches and identified both statistical and computational detection thresholds in terms of the signal-to-noise ratio.
By recasting community detection as a node-wise classification problem on graphs, we can also study it from a learning perspective.
We present a novel family of Graph Neural Networks (GNNs) for solving community detection problems in a supervised learning setting.
We show that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multiclass stochastic block models, which is believed to reach the computational threshold in these cases.
In particular, we propose to augment GNNs with the non-backtracking operator defined on the line graph of edge adjacencies.
The GNNs achieve good performance on real-world datasets.
In addition, we perform the first analysis of the optimization landscape of using (linear) GNNs to solve community detection problems, demonstrating that under certain simplifications and assumptions, the loss value at any local minimum is close to the loss value at the global minimum/minima.
Graph inference problems encompass a large class of tasks and domains, from posterior inference in probabilistic graphical models to community detection and ranking in generic networks, image segmentation or compressed sensing on non-Euclidean domains.
They are motivated both by practical applications, such as in the case of PageRank, and also by fundamental questions on the algorithmic hardness of solving such tasks.
From a data-driven perspective, these problems can be formulated in unsupervised, semi-supervised or supervised learning settings.
In the supervised case, one assumes a dataset of graphs with labels on their nodes, edges or the entire graphs, and attempts to perform node-wise, edge-wise and graph-wise classification by optimizing a loss over a certain parametric class, e.g. neural networks.
Graph Neural Networks (GNNs) are natural extensions of Convolutional Neural Networks to graph-structured data, and have emerged as a powerful class of algorithms to perform complex graph inference leveraging labeled data (Gori et al., 2005; BID3 , and references therein).
In essence, these neural networks learn cascaded linear combinations of intrinsic graph operators interleaved with node-wise (or edge-wise) activation functions.
Since they utilize intrinsic graph operators, they can be applied to varying input graphs, and they offer the same parameter sharing advantages as their CNN counterparts.
In this work, we focus on community detection problems, a wide class of node classification tasks that attempt to discover a clustered, segmented structure within a graph.
The algorithmic approaches to this problem include a rich class of spectral methods, which take advantage of the spectrum of certain operators defined on the graph, as well as approximate message-passing methods such as belief propagation (BP), which performs approximate posterior inference under predefined graphical models (Decelle et al., 2011) .
Focusing on the supervised setting, we study the ability of GNNs to approximate, generalize or even improve upon this class of algorithms.
Our motivation is two-fold.
On the one hand, this problem exhibits algorithmic hardness on some settings, opening up the possibility to discover more efficient algorithms than the current ones.
On the other hand, many practical scenarios fall beyond pre-specified probabilistic models, requiring data-driven solutions. We propose modifications to the GNN architecture, which allow it to exploit edge adjacency information, by incorporating the non-backtracking operator of the graph.
This operator is defined over the edges of the graph and allows a directed flow of information even when the original graph is undirected.
It was introduced to community detection problems by Krzakala et al. (2013) , who propose a spectral method based on the non-backtracking operator.
We refer to the resulting GNN model as a Line Graph Neural Network (LGNN).
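To make the operator concrete, a minimal sketch of how the non-backtracking matrix of a small undirected graph can be built is given below; the toy edge list and all variable names are illustrative, not taken from the paper's implementation.

```python
# Minimal sketch: the non-backtracking operator B of an undirected graph.
# B is indexed by directed edges, and B[(u->v), (w->x)] = 1 iff v == w and x != u,
# i.e. information flows along edges but is never allowed to go straight back.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]                   # toy undirected edge list
directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
index = {e: i for i, e in enumerate(directed)}             # directed edge -> row/column id

B = np.zeros((len(directed), len(directed)))
for (u, v) in directed:
    for (w, x) in directed:
        if v == w and x != u:                              # continue the walk without backtracking
            B[index[(u, v)], index[(w, x)]] = 1.0

print(B.shape)                                   # (2|E|, 2|E|): one row/column per directed edge
print(np.round(np.linalg.eigvals(B).real, 3))    # spectrum used by non-backtracking spectral methods
```

Each row and column corresponds to a directed edge, which is exactly the line-graph indexing on which the LGNN operates.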
Focusing on important random graph families exhibiting community structure, such as the stochastic block model (SBM) and the geometric block model (GBM), we demonstrate improvements in the performance by our GNN and LGNN models compared to other methods, including BP, even in regimes within the so-called computational-to-statistical gap.
A perhaps surprising aspect is that these gains can be obtained even with linear LGNNs, which become parametric versions of power iteration algorithms. We want to mention that, besides community detection tasks, GNN and LGNN can be applied to other node-wise classification problems too.
The reason we are focusing on community detection problems is that this is a relatively well-studied setup, for which different algorithms have been proposed and where computational and statistical thresholds have been studied in several scenarios.
Moreover, synthetic datasets can be easily generated for community detection tasks.
Therefore, we think it is a good setup for comparing different algorithms, beyond its practical value. The strong performance of GNN and LGNN motivates our second main contribution: the analysis of the optimization landscape of simplified and linear GNN models when trained with planted solutions of a given graph distribution.
Under reparametrization, we provide an upper bound on the energy gap controlling the energy difference between local and global minima (or minimum).
With some assumptions on the spectral concentration of certain random matrices, this energy gap will shrink as the size of the input graphs increases, which would mean that the optimization landscape is benign on large enough graphs.
In this work, we have studied data-driven approaches to supervised community detection with graph neural networks.
Our models achieve comparable performance to BP in the binary SBM for various SNRs, and outperform BP in the sparse regime of the 5-class SBM that falls within the computational-to-statistical gap.
This is made possible by considering a family of graph operators including the power graph adjacency matrices, and importantly by introducing the line graph equipped with the non-backtracking matrix.
We also provided a theoretical analysis of the optimization landscapes of simplified linear GNNs for community detection and showed that the gap between the loss values at local and global minima is bounded by quantities related to the concentration of certain random matrices. One word of caution is that our empirical results are inherently non-asymptotic.
Whereas models trained for given graph sizes can be used for inference on arbitrarily sized graphs (owing to the parameter sharing of GNNs), further work is needed in order to understand the generalization properties as |V | increases.
Nevertheless, we believe our work opens up interesting questions, namely better understanding how our results on the energy landscape depend upon specific signal-to-noise ratios, or whether the network parameters can be interpreted mathematically.
This could be useful in the study of computational-to-statistical gaps, where our model can be used to inquire about the form of computationally tractable approximations.
Another current limitation of our model is that it presumes a fixed number of communities to be detected.
Other directions of future research include the extension to the case where the number of communities is unknown and varied, or even increasing with |V |, as well as applications to ranking and edge-cut problems.
A PROOF OF THEOREM 5.1. For simplicity and with an abuse of notation, in the remaining part we redefine L and L̃ to be the negatives of their original definitions in the main section: [displayed equation not recovered].
Thus, minimizing the loss function (5) is equivalent to maximizing the function L_n(β) redefined here. We write the Cholesky decomposition of EX_n as EX_n = R_n R_n^T, and define [displayed equation not recovered], and ΔB_n = B_n − I_n.
Given a symmetric matrix K ∈ R^{M×M}, we let λ_1(K), λ_2(K), ..., λ_M(K) denote the eigenvalues of K in nondecreasing order. | We propose a novel graph neural network architecture based on the non-backtracking matrix defined over the edge adjacencies and demonstrate its effectiveness in community detection tasks on graphs. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:673 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Residual networks (Resnets) have become a prominent architecture in deep learning.
However, a comprehensive understanding of Resnets is still a topic of ongoing research.
A recent view argues that Resnets perform iterative refinement of features.
We attempt to further expose properties of this aspect.
To this end, we study Resnets both analytically and empirically.
We formalize the notion of iterative refinement in Resnets by showing that residual architectures naturally encourage features to move along the negative gradient of loss during the feedforward phase.
In addition, our empirical analysis suggests that Resnets are able to perform both representation learning and iterative refinement.
In general, a Resnet block tends to concentrate representation learning behavior in the first few layers while higher layers perform iterative refinement of features.
Finally we observe that sharing residual layers naively leads to representation explosion and hurts generalization performance, and show that simple existing strategies can help alleviate this problem.
Traditionally, deep neural network architectures (e.g. VGG Simonyan & Zisserman (2014), AlexNet Krizhevsky et al. (2012), etc.) have been compositional in nature, meaning a hidden layer applies an affine transformation followed by a non-linearity, with a different transformation at each layer.
However, a major problem with deep architectures has been that of vanishing and exploding gradients.
To address this problem, solutions like better activations (ReLU Nair & Hinton (2010) ), weight initialization methods Glorot & Bengio (2010) ; He et al. (2015) and normalization methods Ioffe & Szegedy (2015) ; BID0 have been proposed.
Nonetheless, training compositional networks deeper than 15 − 20 layers remains a challenging task. Recently, residual networks (Resnets He et al. (2016a) ) were introduced to tackle these issues and are considered a breakthrough in deep learning because of their ability to learn very deep networks and achieve state-of-the-art performance.
Besides this, the performance of Resnets is generally found to remain largely unaffected by removing individual residual blocks or shuffling adjacent blocks Veit et al. (2016) .
These attributes of Resnets stem from the fact that residual blocks transform representations additively instead of compositionally (like traditional deep networks).
This additive framework, along with the aforementioned attributes, has given rise to two schools of thought about Resnets - the ensemble view, where they are thought to learn an exponential ensemble of shallower models Veit et al. (2016) , and the unrolled iterative estimation view Liao & Poggio (2016) ; Greff et al. (2016) , where Resnet layers are thought to iteratively refine representations instead of learning new ones.
While the success of Resnets may be attributed partly to both these views, our work takes steps towards achieving a deeper understanding of Resnets in terms of their iterative feature refinement perspective.
Our contributions are as follows:
1. We study Resnets analytically and provide a formal view of iterative feature refinement using Taylor's expansion, showing that for any loss function, a residual block naturally encourages representations to move along the negative gradient of the loss with respect to hidden representations.
Each residual block is therefore encouraged to take a gradient step in order to minimize the loss in the hidden representation space.
We empirically confirm this by measuring the cosine between the output of a residual block and the gradient of the loss with respect to the hidden representations prior to the application of the residual block.
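As a rough illustration of this measurement (not the paper's code: the toy residual block, classifier head and data below are placeholders):

```python
# Sketch: cosine between a residual branch's output F(h) and the negative loss gradient
# -dL/dh taken at the block's input h. The tiny block, head and random data are placeholders.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
block = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 16))
head = torch.nn.Linear(16, 10)

h = torch.randn(32, 16, requires_grad=True)      # hidden representation entering the block
y = torch.randint(0, 10, (32,))

f = block(h)                                      # residual branch output F(h)
out = h + f                                       # residual update
loss = F.cross_entropy(head(out), y)
grad_h = torch.autograd.grad(loss, h)[0]          # dL/dh at the block's input

cos = F.cosine_similarity(f.detach().flatten(1), -grad_h.flatten(1), dim=1)
print(cos.mean().item())
```

A positive average cosine indicates that the residual branch points into the half space of −∂L/∂h, i.e. the block behaves like a partial gradient step on the loss in representation space.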
2. We empirically observe that Resnet blocks can perform both hierarchical representation learning (where each block discovers a different representation) and iterative feature refinement (where each block improves slightly but keeps the semantics of the representation of the previous layer).
Specifically in Resnets, lower residual blocks learn to perform representation learning, meaning that they change representations significantly and removing these blocks can sometimes drastically hurt prediction performance.
The higher blocks on the other hand essentially learn to perform iterative inference-minimizing the loss function by moving the hidden representation along the negative gradient direction.
In the presence of shortcut connections 1 , representation learning is dominantly performed by the shortcut connection layer and most of residual blocks tend to perform iterative feature refinement.3.
3. The iterative refinement view suggests that deep networks can potentially leverage intensive parameter sharing for the layer performing iterative inference.
But sharing large number of residual blocks without loss of performance has not been successfully achieved yet.
Towards this end we study two ways of reusing residual blocks:
1. Sharing residual blocks during training;
2. Unrolling a residual block for more steps than it was trained to unroll.
We find that training Resnet with naively shared blocks leads to bad performance.
We expose reasons for this failure and investigate a preliminary fix for this problem.
Our main contribution is formalizing the view of iterative refinement in Resnets and showing analytically that residual blocks naturally encourage representations to move in the half space of negative loss gradient, thus implementing a gradient descent in the activation space (each block reduces loss and improves accuracy).
We validate the theory experimentally on a wide range of Resnet architectures. We further explored two forms of sharing blocks in Resnets.
We show that Resnet can be unrolled to more steps than it was trained on.
Next, we found that, counterintuitively, training Resnets with shared residual blocks leads to overfitting.
While we propose a variant of batch normalization to mitigate it, we leave further investigation of this phenomenon for future work.
We hope that our developed formal view, and practical results, will aid analysis of other models employing iterative inference and residual connections.
If a residual block moves the layer's pre-activation h_o = Wx + b along −∂L/∂h_o, then it is equivalent to updating the parameters of the convolution layer using a gradient update step.
To see this, consider the change in h_o from updating the parameters using gradient descent with step size η.
This is given by Δh_o = −η (‖x‖² + 1) ∂L/∂h_o. Thus, moving h_o in the half space of −∂L/∂h_o has the same effect as that achieved by updating the parameters W, b using gradient descent.
Although we found this insight interesting, we don't build upon it in this paper.
We leave this as a future work. | Residual connections really perform iterative inference | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:674 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We develop end-to-end learned reconstructions for lensless mask-based cameras, including an experimental system for capturing aligned lensless and lensed images for training.
Various reconstruction methods are explored, on a scale from classic iterative approaches (based on the physical imaging model) to deep learned methods with many learned parameters.
In the middle ground, we present several variations of unrolled alternating direction method of multipliers (ADMM) with varying numbers of learned parameters.
The network structure combines knowledge of the physical imaging model with learned parameters updated from the data, which compensate for artifacts caused by physical approximations.
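As a much-simplified sketch of the unrolling idea - written here as an ISTA-style proximal-gradient loop with per-iteration learned step sizes and thresholds, rather than the full ADMM used in the paper - where the point-spread-function forward model and all shapes are assumptions for illustration:

```python
# Simplified sketch of unrolling an iterative reconstruction into a trainable network.
# The forward model is circular convolution with a point spread function (PSF); the loop is
# an ISTA-style stand-in for unrolled ADMM, with learned per-iteration steps and thresholds.
import torch

class UnrolledRecon(torch.nn.Module):
    def __init__(self, psf, n_iters=5):
        super().__init__()
        self.H = torch.fft.fft2(psf)                                   # physical imaging model
        self.step = torch.nn.Parameter(torch.full((n_iters,), 1e-3))   # learned step sizes
        self.thresh = torch.nn.Parameter(torch.full((n_iters,), 1e-4)) # learned soft thresholds

    def forward(self, y):
        x = torch.zeros_like(y)
        for k in range(len(self.step)):
            Ax = torch.fft.ifft2(torch.fft.fft2(x) * self.H).real
            grad = torch.fft.ifft2(torch.fft.fft2(Ax - y) * self.H.conj()).real  # A^T (Ax - y)
            x = x - self.step[k] * grad                                          # data-fidelity step
            x = torch.sign(x) * torch.clamp(x.abs() - self.thresh[k], min=0.0)   # sparsity prox
        return x

psf = torch.rand(64, 64)
y = torch.rand(64, 64)                      # placeholder lensless measurement
print(UnrolledRecon(psf)(y).shape)
```

In the unrolled-ADMM variants described in the paper, the learned quantities would presumably be the per-iteration ADMM parameters instead, while the physical model enters through the same forward operator.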
Our unrolled approach is 20X faster than classic methods and produces better reconstruction quality than both the classic and deep methods on our experimental system. | We improve the reconstruction time and quality on an experimental mask-based lensless imager using an end-to-end learning approach which incorporates knowledge of the imaging model. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:675 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep learning, a rebranding of deep neural network research works, has achieved a remarkable success in recent years.
With multiple hidden layers, deep learning models aim at computing the hierarchical feature representations of the observational data.
Meanwhile, due to its severe disadvantages in data consumption, computational resources, parameter tuning costs and the lack of result explainability, deep learning has also drawn substantial criticism.
In this paper, we will introduce a new representation learning model, namely “Sample-Ensemble Genetic Evolutionary Network” (SEGEN), which can serve as an alternative approach to deep learning models.
Instead of building one single deep model, based on a set of sampled sub-instances, SEGEN adopts a genetic-evolutionary learning strategy to build a group of unit models generation by generation.
The unit models incorporated in SEGEN can be either traditional machine learning models or the recent deep learning models with a much “narrower” and “shallower” architecture.
The learning results of each instance at the final generation will be effectively combined from each unit model via diffusive propagation and ensemble learning strategies.
From the computational perspective, SEGEN requires far less data, fewer computational resources and parameter tuning efforts, but has sound theoretic interpretability of the learning process and results.
Extensive experiments have been done on several different real-world benchmark datasets, and the experimental results obtained by SEGEN have demonstrated its advantages over the state-of-the-art representation learning models.
In recent years, deep learning, a rebranding of deep neural network research works, has achieved a remarkable success.
The essence of deep learning is to compute the hierarchical feature representations of the observational data BID8 ; BID16 .
With multiple hidden layers, the deep learning models have the capacity to capture very good projections from the input data space to the objective output space, whose outstanding performance has been widely illustrated in various applications, including speech and audio processing BID7 ; , language modeling and processing BID0 ; BID19 , information retrieval BID10 ; BID22 , objective recognition and computer vision BID16 , as well as multimodal and multi-task learning BID27 BID28 .
By this context so far, various kinds of deep learning models have been proposed already, including deep belief network BID11 , deep Boltzmann machine BID22 , deep neural network BID13 ; BID14 and deep autoencoder model BID24 .Meanwhile
, deep learning models also suffer from several serious criticism due to their several severe disadvantages BID29 . Generally
, learning and training deep learning models usually demands (1) a large amount of training data, (2) large and powerful computational facilities, (3) heavy parameter tuning costs, but lacks (4) theoretic explanation of the learning process and results. These disadvantages
greatly hinder the application of deep learning models in many areas which cannot meet the requirements or requests a clear interpretability of the learning performance. Due to these reasons
, by this context so far, deep learning research and application works are mostly carried out within/via the collaboration with several big technical companies, but the models proposed by them (involving hundreds of hidden layers, billions of parameters, and using a large cluster with thousands of server nodes BID5 ) can hardly be applied in other real-world applications.In this paper, we propose a brand new model, namely SEGEN (Sample-Ensemble Genetic Evolutionary Network), which can work as an alternative approach to the deep learning models. Instead of building
one single model with a deep architecture, SEGEN adopts a genetic-evolutionary learning strategy to train a group of unit models generation by generation. Here, the unit models
can be either traditional machine learning models or deep learning models with a much "narrower" and "shallower" structure. Each unit model will
be trained with a batch of training instances sampled form the dataset. By selecting the good
unit models from each generation (according to their performance on a validation set), SEGEN will evolve itself and create the next generation of unit modes with probabilistic genetic crossover and mutation, where the selection and crossover probabilities are highly dependent on their performance fitness evaluation. Finally, the learning
results of the data instances will be effectively combined from each unit model via diffusive propagation and ensemble learning strategies. These terms and techniques
mentioned here will be explained in great detail in Section 4. Compared with the existing
deep learning models, SEGEN have several great advantages, and we will illustrate them from both the bionics perspective and the computational perspective as follows.From the bionics perspective, SEGEN effectively models the evolution of creatures from generations to generations, where the creatures suitable for the environment will have a larger chance to survive and generate the offsprings. Meanwhile, the offsprings
inheriting good genes from its parents will be likely to adapt to the environment as well. In the SEGEN model, each
unit network model in generations can be treated as an independent creature, which will receive a different subsets of training instances and learn its own model variables. For the unit models suitable
for the environment (i.e., achieving a good performance on a validation set), they will have a larger chance to generate their child models. The parent model achieving better
performance will also have a greater chance to pass their variables to the child model.From the computational perspective, SEGEN requires far less data and resources, and also has a sound theoretic explanation of the learning process and results. The unit models in each generation
of SEGEN are of a much simpler architecture, learning of which can be accomplished with much less training data, less computational resources and less hyper-parameter tuning efforts. In addition, the training dataset
pool, model hyper-parameters are shared by the unit models, and the increase of generation size (i.e., unit model number in each generation) or generation number (i.e., how many generation rounds will be needed) will not increase the learning resources consumption. The relatively "narrower" and "shallower
" structure of unit models will also significantly enhance the interpretability of the unit models training process as well as the learning results, especially if the unit models are the traditional non-deep learning models. Furthermore, the sound theoretical foundations
of genetic algorithm and ensemble learning will also help explain the information inheritance through generations and result ensemble in SEGEN. In this paper, we will use network embedding problem
BID25 BID2 ; BID20 (applying autoencoder as the unit model) as an example to illustrate the SEGEN model. Meanwhile, applications of SEGEN on other data categories
(e.g., images and raw feature inputs) with CNN and MLP as the unit model will also be provided in Section 5.3. The following parts of this paper are organized as follows
. The problem formulation is provided in Section 3. Model SEGEN
will be introduced in Section 4, whose performance
will be evaluated in Section 5. Finally, Section 2 introduces the related works and we conclude
this paper in Section 6.
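To make the generation-by-generation procedure described above concrete, here is a schematic sketch; the unit model, fitness measure and variation operators below are simple placeholders rather than the exact components detailed in Section 4.

```python
# Schematic sketch of a SEGEN-style loop: sample data for each unit model, select the fittest
# units on a validation set, and create the next generation via crossover and mutation.
# All components here are toy placeholders.
import random

def train_unit(sampled_batch):                    # placeholder "narrow/shallow" unit model
    return {"params": [random.random() for _ in range(4)]}

def fitness(unit, validation_set):                # placeholder validation-set fitness
    return random.random()

def crossover_and_mutate(parent_a, parent_b):
    child = [random.choice(pair) for pair in zip(parent_a["params"], parent_b["params"])]
    child = [v + random.gauss(0, 0.01) if random.random() < 0.1 else v for v in child]
    return {"params": child}

pool, val = list(range(1000)), list(range(100))
population = [train_unit(random.sample(pool, 32)) for _ in range(10)]
for generation in range(5):
    ranked = sorted(population, key=lambda u: fitness(u, val), reverse=True)
    parents = ranked[: len(ranked) // 2]                      # units suited to the "environment"
    children = [crossover_and_mutate(*random.sample(parents, 2))
                for _ in range(len(population) - len(parents))]
    population = parents + children
# final per-instance outputs would then be combined across units (ensembling / diffusive propagation)
```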
In this paper, we have introduced an alternative approach to deep learning models, namely SEGEN.
Significantly different from existing deep learning models, SEGEN builds a group of unit models generation by generation, instead of building one single model with an extremely deep architecture.
The choice of unit models covered in SEGEN can be either traditional machine learning models or the latest deep learning models with a "smaller" and "narrower" architecture.
SEGEN has great advantages over deep learning models, since it requires much less training data, computational resources, parameter tuning efforts but provides more information about its learning and result integration process.
The effectiveness and efficiency of SEGEN have been well demonstrated by extensive experiments on real-world network-structured datasets. | We introduce a new representation learning model, namely “Sample-Ensemble Genetic Evolutionary Network” (SEGEN), which can serve as an alternative approach to deep learning models. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:676 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
How can we teach artificial agents to use human language flexibly to solve problems in a real-world environment?
We have one example in nature of agents being able to solve this problem: human babies eventually learn to use human language to solve problems, and they are taught with an adult human-in-the-loop.
Unfortunately, current machine learning methods (e.g. from deep reinforcement learning) are too data inefficient to learn a language in this way (3).
An outstanding goal is finding an algorithm with a suitable ‘language learning prior’ that allows it to learn human language, while minimizing the number of required human interactions.
In this paper, we propose to learn such a prior in simulation, leveraging the increasing amount of available compute for machine learning experiments (1).
We call our approach Learning to Learn to Communicate (L2C).
Specifically, in L2C we train a meta-learning agent in simulation to interact with populations of pre-trained agents, each with their own distinct communication protocol.
Once the meta-learning agent is able to quickly adapt to each population of agents, it can be deployed in new populations unseen during training, including populations of humans.
To show the promise of the L2C framework, we conduct some preliminary experiments in a Lewis signaling game (4), where we show that agents
trained with L2C are able to learn a simple form of human language (represented by a hand-coded compositional language) in fewer iterations than randomly initialized agents.
Language is one of the most important aspects of human intelligence; it allows humans to coordinate and share knowledge with each other.
We will want artificial agents to understand language as it is a natural means for us to specify their goals. So how can we train agents to understand language?
We adopt the functional view of language BID16 that has recently gained popularity (8; 14) : agents understand language when they can use language to carry out tasks in the real world.
One approach to training agents that can use language in their environment is via emergent communication, where researchers train randomly initialized agents to solve tasks requiring communication (7; 16 ).
An open question in emergent communication is how the resulting communication protocols can be transferred to learning human language.
Existing approaches attempt to do this using auxiliary tasks, for example having agents predict the label of an image in English while simultaneously playing an image-based referential game BID11 .
While this works for learning the names of objects, it's unclear if simply using an auxiliary loss will scale to learning the English names of complex concepts, or learning to use English to interact in a grounded environment. One approach that we know will work (eventually) for training language learning agents is using a human-in-the-loop, as this is how human babies acquire language.
In other words, if we had a good enough model architecture and learning algorithm, the human-in-the-loop approach should work.
However, recent work in this direction has concluded that current algorithms are too sample inefficient to effectively learn a language with compositional properties from humans (3).
Human guidance is expensive, and thus we would want such an algorithm to be as sample efficient as possible.
An open problem is thus to create an algorithm or training procedure that results in increased sample efficiency for language learning with a human-in-the-loop. In this paper, we present the Learning to Learn to Communicate (L2C) framework, with the goal of training agents to quickly learn new (human) languages.
The core idea behind L2C is to leverage the increasing amount of available compute for machine learning experiments (1) to learn a 'language learning prior' by training agents via meta-learning in simulation.
[Figure 1: Diagram of the L2C framework. An advantage of L2C is that agents can be trained in an external environment (which grounds the language), where agents interact with the environment via actions and language. Thus, (in theory) L2C could be scaled to learn complicated grounded tasks involving language.]
Specifically, we train a meta-learning agent in simulation to interact with populations of pre-trained agents, each with their own distinct communication protocol.
Once the meta-learning agent is able to quickly adapt to each population of agents, it can be deployed in new populations unseen during training, including populations of humans.
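A high-level sketch of what such a meta-training loop could look like is given below; the populations, the signaling-game interaction and the Reptile-style outer update are stand-ins, not the exact algorithm used in the experiments.

```python
# High-level sketch of an L2C-style meta-training loop: repeatedly sample a pre-trained
# population, adapt to it with a few interactions, and update the meta-parameters so that
# future adaptation is fast. All components below are placeholders.
import copy
import random

def play_episode(agent, population):
    return random.random()                        # placeholder loss from one interaction

def adapt(agent, population, k_steps=5, inner_lr=0.1):
    adapted = copy.deepcopy(agent)
    for _ in range(k_steps):
        loss = play_episode(adapted, population)
        adapted["skill"] -= inner_lr * loss       # placeholder inner-loop update
    return adapted

meta_agent = {"skill": 0.0}
training_populations = [f"population_{i}" for i in range(20)]   # distinct pre-trained protocols

for meta_step in range(1000):
    population = random.choice(training_populations)
    adapted = adapt(meta_agent, population)
    # Reptile-style outer update: move the meta-parameters toward the adapted parameters
    meta_agent["skill"] += 0.01 * (adapted["skill"] - meta_agent["skill"])

# at deployment, `adapt` is run on an unseen population (e.g. humans) with few interactions
```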
The L2C framework has two main advantages: (1) it permits agents to learn language that is grounded in an environment with which the agents can interact (i.e. it is not limited to referential games); and (2) in contrast with work from the instruction following literature (2), agents can be trained via L2C to both speak (output language to help accomplish their goal) and listen (map from the language to a goal or sequence of actions). To
show the promise of the L2C framework, we provide some preliminary experiments in a Lewis signaling game BID3 . Specifically
, we show that agents trained with L2C are able to learn a simple form of human language (represented by a hand-coded compositional language) in fewer iterations than randomly initialized agents. These preliminary
results suggest that L2C is a promising framework for training agents to learn human language from few human interactions. | We propose to use meta-learning for more efficient language learning, via a kind of 'domain randomization'. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:677 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization.
This allows us to explicitly search for schedules that achieve good generalization.
We describe the structure of the gradient of a validation error w.r.t.
the learning rates, the hypergradient, and based on this we introduce a novel online algorithm.
Our method adaptively interpolates between two recently proposed techniques (Franceschi et al., 2017; Baydin et al.,2018), featuring increased stability and faster convergence.
We show empirically that the proposed technique compares favorably with baselines and related methods in terms of final test accuracy.
Learning rate (LR) adaptation for first-order optimization methods is one of the most widely studied aspects in optimization for learning methods -in particular neural networks -with early work dating back to the origins of connectionism (Jacobs, 1988; Vogl et al., 1988) .
More recent work focused on developing complex schedules that depend on a small number of hyperparameters (Loshchilov & Hutter, 2017; Orabona & Pál, 2016) .
Other papers in this area have focused on the optimization of the (regularized) training loss (Schaul et al., 2013; Baydin et al., 2018; Wu et al., 2018) .
While quick optimization is desirable, the true goal of supervised learning is to minimize the generalization error, which is commonly estimated by holding out part of the available data for validation.
Hyperparameter optimization (HPO), a related but distinct branch of the literature, specifically focuses on this aspect, with less emphasis on the goal of rapid convergence on a single task.
Research in this direction is vast (see Hutter et al. (2019) for an overview) and includes model-based (Snoek et al., 2012; Hutter et al., 2015) , model-free (Bergstra & Bengio, 2012; Hansen, 2016) , and gradientbased (Domke, 2012; Maclaurin et al., 2015) approaches.
Additionally, works in the area of learning to optimize (Andrychowicz et al., 2016; Wichrowska et al., 2017) have focused on the problem of tuning parameterized optimizers on whole classes of learning problems but require prior expensive optimization and are not designed to speed up training on a single specific task.
The goal of this paper is to automatically compute in an online fashion a learning rate schedule for stochastic optimization methods (such as SGD) only on the basis of the given learning task, aiming at producing models with associated small validation error.
We study the problem of finding a LR schedule under the framework of gradient-based hyperparameter optimization (Franceschi et al., 2017): we consider as an optimal schedule η* = (η*_0, ..., η*_{T−1}) ∈ R_+^T a solution to the following constrained optimization problem
min { f_T(η) = E(w_T(η)) : η ∈ R_+^T }   s.t.   w_0 = w̄,   w_{t+1}(η) = Φ_t(w_t(η), η_t)   for t ∈ {0, ..., T−1} = [T],
where E : R^d → R_+ is an objective function, Φ_t : R^d × R_+ → R^d is a (possibly stochastic) weight update dynamics, w̄ ∈ R^d represents the initial model weights (parameters), and finally w_t are the weights after t iterations.
We can think of E as either the training or the validation loss of the model, while the dynamics Φ describe the update rule (such as SGD, SGD with momentum, Adam, etc.).
For example, in the case of SGD, Φ_t(w_t, η_t) = w_t − η_t ∇L_t(w_t), with L_t(w_t) the (possibly regularized) training loss on the t-th minibatch.
The horizon T should be large enough so that the training error can be effectively minimized, in order to avoid underfitting.
Note that a too large value of T does not necessarily harm since η k = 0 for k >T is still a feasible solution, implementing early stopping in this setting.
Finding a good learning rate schedule is an old but crucially important issue in machine learning.
This paper makes a step forward, proposing an automatic method to obtain well-performing LR schedules that uses an adaptive moving average over increasingly long hypergradient approximations.
MARTHE interpolates between HD and RTHO taking the best of the two worlds.
The implementation of our algorithm is fairly simple within modern automatic differentiation and deep learning environments, adding only a moderate computational overhead over the underlying optimizer complexity.
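For intuition, a minimal sketch of the shortest-horizon special case (an HD-style update) from which MARTHE interpolates; the model, data and the hyper-learning-rate beta are placeholders.

```python
# Sketch of online learning-rate adaptation via hypergradients in the one-step (HD-style)
# special case: the LR is nudged using the inner product of consecutive gradients.
import torch

model = torch.nn.Linear(10, 1)
lr, beta = 0.01, 1e-4
prev_grads = None

def minibatch():
    x = torch.randn(64, 10)
    return x, x.sum(dim=1, keepdim=True)          # placeholder regression task

for t in range(200):
    x, y = minibatch()
    loss = torch.nn.functional.mse_loss(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    if prev_grads is not None:
        # d loss / d eta_{t-1} = -<grad_t, grad_{t-1}>, so gradient descent on eta gives:
        hypergrad = -sum((g * pg).sum() for g, pg in zip(grads, prev_grads))
        lr = max(lr - beta * hypergrad.item(), 0.0)
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p -= lr * g                            # SGD step with the current schedule value
    prev_grads = [g.detach() for g in grads]
```

MARTHE replaces this single-step inner product with an adaptively damped moving average over increasingly long hypergradient approximations, trading off the short horizon of HD against the longer-horizon hypergradients of RTHO.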
In this work, we studied the case of optimizing the learning rate schedules for image classification tasks; we note, however, that MARTHE is a general technique for finding online hyperparameter schedules (albeit it scales linearly with the number of hyperparameters), possibly implementing a competitive alternative in other application scenarios, such as tuning regularization parameters (Luketina et al., 2016) .
We plan to further validate the method both in other learning domains for adapting the LR and also to automatically tune other crucial hyperparameters.
We believe that another interesting future research direction could be to learn the adaptive rules for µ and β in a meta learning fashion. | MARTHE: a new method to fit task-specific learning rate schedules from the perspective of hyperparameter optimization | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:678 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent years have witnessed some exciting developments in the domain of generating images from scene-based text descriptions.
These approaches have primarily focused on generating images from a static text description and are limited to generating images in a single pass.
They are unable to generate an image interactively based on an incrementally additive text description (something that is more intuitive and similar to the way we describe an image).
We propose a method to generate an image incrementally based on a sequence of graphs of scene descriptions (scene-graphs).
We propose a recurrent network architecture that preserves the image content generated in previous steps and modifies the cumulative image as per the newly provided scene information.
Our model utilizes Graph Convolutional Networks (GCN) to cater to variable-sized scene graphs along with Generative Adversarial image translation networks to generate realistic multi-object images without needing any intermediate supervision during training.
We experiment with Coco-Stuff dataset which has multi-object images along with annotations describing the visual scene and show that our model significantly outperforms other approaches on the same dataset in generating visually consistent images for incrementally growing scene graphs.
To truly understand the visual world, our models should be able to not only recognize images but also generate them.
Generative Adversarial Networks, proposed by BID3 have proven immensely useful in generating real world images.
GANs are composed of a generator and a discriminator that are trained with competing goals.
The generator is trained to generate samples towards the true data distribution to fool the discriminator, while the discriminator is optimized to distinguish between real samples from the true data distribution and fake samples produced by the generator.
The next step in this area is to generate customized images and videos in response to the individual tastes of a user.
A grounding of language semantics in the context of visual modality has wide-reaching impacts in the fields of Robotics, AI, Design and image retrieval.
To this end, there has been exciting recent progress on generating images from natural language descriptions.
Conditioned on given text descriptions, conditional-GANs BID11 are able to generate images that are highly related to the text meanings.
Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. Leading methods for generating images from sentences struggle with complex sentences containing many objects.
A recent development in this field has been to represent the information conveyed by a complex sentence more explicitly as a scene graph of objects and their relationships BID7 .
Scene graphs are a powerful structured representation for both images and language; they have been used for semantic image retrieval BID6 and for evaluating BID0 and improving BID9 image captioning.
In our work, we propose to leverage these scene graphs by incrementally expanding them into more complex structures and generating corresponding images.
Most of the current approaches lack the ability to generate images incrementally in multiple steps while preserving the contents of the image generated so far.
We overcome this shortcoming by conditioning the image generation process over the cumulative image generated over the previous steps and over the unseen parts of the scene graph.
This allows our approach to generate high quality complex real-world scenes with several objects by distributing the image generation over multiple steps without losing the context.
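Schematically, one incremental generation step can be pictured as in the sketch below; the module names and shapes are placeholders standing in for the GCN, layout and cascaded-refinement components of the model.

```python
# Schematic forward pass of one incremental step: embed the newly added scene-graph objects,
# turn them into a spatial layout, and condition the generator on both that layout and the
# image produced at the previous step. All modules and shapes are simplified placeholders.
import torch

class IncrementalGenerator(torch.nn.Module):
    def __init__(self, obj_dim=128, img_ch=3, size=64):
        super().__init__()
        self.size = size
        self.gcn = torch.nn.Linear(obj_dim, obj_dim)                    # stand-in for graph convolutions
        self.to_layout = torch.nn.Linear(obj_dim, size * size)          # object embedding -> layout map
        self.refine = torch.nn.Conv2d(img_ch + 1, img_ch, 3, padding=1) # stand-in for cascade refinement

    def forward(self, prev_image, new_object_embeddings):
        h = torch.relu(self.gcn(new_object_embeddings))                 # embed the unseen graph part
        layout = self.to_layout(h).sum(0).view(1, 1, self.size, self.size)
        cond = torch.cat([prev_image, layout], dim=1)                   # condition on the cumulative image
        return torch.tanh(self.refine(cond))                            # next image in the sequence

gen = IncrementalGenerator()
image = torch.zeros(1, 3, 64, 64)
for new_objects in [torch.randn(2, 128), torch.randn(1, 128)]:          # incrementally added objects
    image = gen(image, new_objects)
print(image.shape)
```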
Recently, BID2 proposed an approach for incremental image generation but their method is limited to synthetic images due to the need of supervision in the intermediate step.
Our approach circumvents the need for intermediate supervision by enforcing perceptual regularization and is therefore compatible with training for even real world images (as we show later).
A visualization of our framework's outputs with a progressively growing scene graph can be seen in Figure 1.
We can see how at each step new objects get inserted into the image generated so far without losing the context. To
method does not need any kind of intermediate supervision and hence, is not limited to synthetic images (where you can manually generate ground truth intermediate images). It is
therefore useful for generating real-world images (such as for MS-COCO) which, to the best of our knowledge, is the first attempt of its kind.
In this paper, we proposed an approach to sequentially generate images using incrementally growing scene graphs with context preservation.
Through extensive evaluation and qualitative results, we demonstrate that our approach is indeed able to generate an image sequence that is consistent over time and preserves the context in terms of objects generated in previous steps.
In the future, we plan to explore end-to-end generation from text descriptions by augmenting our methodology with a module that generates scene graphs from language input.
While scene-graphs provide a very convenient modality to capture image semantics, we would like to explore ways to take natural sentences as inputs to modify the underlying scene graph.
The current baseline method does single shot generation by passing the entire layout map through the Cascade Refinement Net for the final image generation.
We plan to investigate whether the quality of generation can be improved by instead using attention on the GCN embeddings during generation.
This could also potentially make the task of only modifying certain regions in the image easier.
Further, we plan to explore better architectures for image generation through layouts for higher resolution image generation. | Interactively generating image from incrementally growing scene graphs in multiple steps using GANs while preserving the contents of image generated in previous steps | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:679 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative models have proven to be an outstanding tool for representing high-dimensional probability distributions and generating realistic looking images.
A fundamental characteristic of generative models is their ability to produce multi-modal outputs.
However, while training, they are often susceptible to mode collapse, which means that the model is limited in mapping the input noise to only a few modes of the true data distribution.
In this paper, we draw inspiration from Determinantal Point Process (DPP) to devise a generative model that alleviates mode collapse while producing higher quality samples.
DPP is an elegant probabilistic measure used to model negative correlations within a subset and hence quantify its diversity.
We use DPP kernel to model the diversity in real data as well as in synthetic data.
Then, we devise a generation penalty term that encourages the generator to synthesize data with a similar diversity to real data.
In contrast to previous state-of-the-art generative models that tend to use additional trainable parameters or complex training paradigms, our method does not change the original training scheme.
Embedded in adversarial training and in a variational autoencoder, our Generative DPP approach shows a consistent resistance to mode collapse on a wide variety of synthetic data and natural image datasets including MNIST, CIFAR10, and CelebA, while outperforming state-of-the-art methods in data efficiency, convergence time, and generation quality.
Our code will be made publicly available.
Deep generative models have gained enormous research interest in recent years as a powerful framework to learn high dimensional data in an unsupervised fashion.
Generative Adversarial Networks (GANs) BID10 and Variational AutoEncoders (VAEs) are among the most dominant generative approaches.
They consist of training two networks: a generator (decoder) and a discriminator (encoder), where the generator attempts to map random noise to fake data points that simulate the probability distribution of real data.
GANs are typically associated with higher quality images compared to VAEs. Nevertheless, in the process of learning multi-modal complex distributions, both models may converge to a trivial solution where the generator learns to produce only a few modes exclusively, which is referred to as the mode collapse problem.
To address this, we propose utilizing Determinantal Point Processes (DPP) to model the diversity within data samples. DPP is a probabilistic model that has mainly been adopted for solving subset selection problems with diversity constraints BID21 , such as video and document summarization. However, sampling from a DPP requires quantifying the diversity of 2^N subsets, where N is the size of the ground set. This renders DPP sampling from true data computationally inefficient in the generation domain. The key idea of our work is to model the diversity within real and fake data throughout the training process, which adds only an insignificant computational cost. Then, we encourage the generator to produce samples with a diversity distribution similar to that of the true data by back-propagating the DPP metric through the generator. This way, the generator explicitly learns to cover more modes of the real distribution, and accordingly alleviates mode collapse.
Recent approaches tackled mode collapse in one of two different ways: (1) improving the learning of the system to reach a better convergence point (e.g. BID28 ; BID0 ); or (2) explicitly enforcing the models to capture diverse modes or map back to the true-data distribution (e.g. BID37 ; BID2 ). Here we focus on a relaxed version of the former, where we use the same learning paradigm as standard GANs and only change the objective function. The advantage of such an approach is that it avoids adding any extra trainable parameters to the trained system while maintaining the same back-propagation steps as standard GANs. Thus, our model converges faster to a fair equilibrium point where the generator captures the diversity of the true-data distribution while preserving the quality of generations.
Contribution. We introduce a new loss function, which we denote the Generative Determinantal Point Processes (GDPP) loss. Our loss only assumes access to a generator G, a feature extraction function φ(·), and a sampler from the true data distribution p_d. The loss encourages the generator to diversify generated samples so that they match the diversity of real data. This criterion can be considered a complement to the original adversarial loss, which attempts to learn a distribution indistinguishable from the true-data distribution without being specific to diverse modes. We assess the performance of GDPP on three different synthetic data environments, while also verifying its superiority on three real-world image datasets. We compared our approach with state-of-the-art approaches that use more complex architectures and learning paradigms. Experiments show that our method outperforms all competing methods in terms of alleviating mode collapse and generation quality.
In this work, we introduce a novel criterion to train generative networks on capturing a similar diversity to one of the true data by utilizing Determinantal Point Process(DPP).
We apply our criterion to Generative Adversarial training and the Variational Autoencoder by learning a kernel via features extracted from the discriminator/encoder.
We train the generator by optimizing a loss between the eigenvalues and eigenvectors of the fake and real kernels, so as to simulate the diversity of the real data.
Our GDPP framework accumulates many desirable properties: it does not require any extra trainable parameters, it operates in an unsupervised setting, yet it consistently outperforms state-of-the-art methods on a battery of synthetic data and real image datasets, as measured by generation quality and robustness to mode collapse.
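One plausible minimal instantiation of this kernel-matching idea is sketched below; the exact normalization and weighting used by the GDPP loss may differ from this simplified version.

```python
# Sketch of a GDPP-style diversity term: build DPP-style kernels from discriminator/encoder
# features of real and fake batches, and penalize mismatch between their eigenvalues and
# eigenvectors. Normalization and weighting here are simplified assumptions.
import torch
import torch.nn.functional as F

def gdpp_style_loss(phi_fake, phi_real):
    # phi_*: (batch, feature_dim) features extracted by the discriminator / encoder
    L_fake = phi_fake @ phi_fake.t()
    L_real = phi_real @ phi_real.t()
    eval_f, evec_f = torch.linalg.eigh(L_fake)
    eval_r, evec_r = torch.linalg.eigh(L_real)
    eval_r, evec_r = eval_r.detach(), evec_r.detach()      # real-data diversity is the target
    loss_values = F.mse_loss(eval_f, eval_r)               # match the diversity "magnitudes"
    loss_vectors = (1.0 - F.cosine_similarity(evec_f, evec_r, dim=0)).sum()  # match the "structure"
    return loss_values + loss_vectors

phi_fake = torch.randn(16, 32, requires_grad=True)         # e.g. features of generated samples
phi_real = torch.randn(16, 32)
print(gdpp_style_loss(phi_fake, phi_real))
```

This term would simply be added to the generator's usual adversarial (or VAE) objective, leaving the training scheme and the parameter count unchanged.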
Furthermore, GDPP-GANs exhibit stabilized adversarial training and have been shown to be time- and data-efficient compared to state-of-the-art approaches.
Moreover, the GDPP criterion is architecture and model invariant, allowing it to be embedded with any variant of generative models, such as adversarial feature learning or conditional GANs. | The addition of a diversity criterion inspired from DPP in the GAN objective avoids mode collapse and leads to better generations. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:68 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In some important computer vision domains, such as medical or hyperspectral imaging, we care about the classification of tiny objects in large images.
However, most Convolutional Neural Networks (CNNs) for image classification were developed using biased datasets that contain large objects, in mostly central image positions.
To assess whether classical CNN architectures work well for tiny object classification we build a comprehensive testbed containing two datasets: one derived from MNIST digits and one from histopathology images.
This testbed allows controlled experiments to stress-test CNN architectures with a broad spectrum of signal-to-noise ratios.
Our observations indicate that: (1) there exists a limit to the signal-to-noise ratio below which CNNs fail to generalize, and this limit is affected by dataset size - more data leading to better performance; however, the amount of training data required for the model to generalize scales rapidly with the inverse of the object-to-image ratio; (2) in general, higher capacity models exhibit better generalization; (3) when the approximate object sizes are known, adapting the receptive field is beneficial; and (4) for very small signal-to-noise ratios the choice of global pooling operation affects optimization, whereas for relatively large signal-to-noise values, all tested global pooling operations exhibit similar performance.
Convolutional Neural Networks (CNNs) are the current state-of-the-art approach for image classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015; Huang et al., 2017) .
The goal of image classification is to assign an image-level label to an image.
Typically, it is assumed that an object (or concept) that correlates with the label is clearly visible and occupies a significant portion of the image (Krizhevsky, 2009; Deng et al., 2009).
Yet, in a variety of real-life applications, such as medical image or hyperspectral image analysis, only a small portion of the input correlates with the label, resulting in low signal-to-noise ratio.
We define this input image signal-to-noise ratio as Object to Image (O2I) ratio.
The O2I ratio range for three real-life datasets is depicted in Figure 1 .
As can be seen, there exists a distribution shift between standard classification benchmarks and domain specific datasets.
For instance, in the ImageNet dataset (Deng et al., 2009 ) objects fill at least 1% of the entire image, while in histopathology slices (Ehteshami Bejnordi et al., 2017) cancer cells can occupy as little as 10 −6 % of the whole image.
Recent works have studied CNNs under different noise scenarios, either by performing random input-to-label experiments (Zhang et al., 2017; or by directly working with noisy annotations (Mahajan et al., 2018; Jiang et al., 2017; Han et al., 2018) .
While, it has been shown that large amounts of label-corruption noise hinders the CNNs generalization (Zhang et al., 2017; , it has been further demonstrated that CNNs can mitigate this label-corruption noise by increasing the size of training data (Mahajan et al., 2018) , tuning the optimizer hyperparameters (Jastrzębski et al., 2017) or weighting input training samples (Jiang et al., 2017; Han et al., 2018) .
However, all these works focus on input-to-label corruption and do not consider the case of noiseless input-to-label assignments with low and very low O2I ratios.
In this paper, we build a novel testbed allowing us to specifically study the performance of CNNs when applied to tiny object classification and to investigate the interplay between input signal-to-noise ratio and model generalization.
We create two synthetic datasets inspired by the children's puzzle book Where's Wally?
(Handford, 1987).
The first dataset is derived from MNIST digits and allows us to control the O2I ratio over a broad range.
[Figure 1 caption: O2I ratio ranges for two medical imaging datasets (CAMELYON17 (Ehteshami Bejnordi et al., 2017) and MiniMIAS (Suckling, 1994)) as well as one standard computer vision classification dataset (ImageNet (Deng et al., 2009)); the O2I ratio is the fraction of the image occupied by the object of interest.]
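A minimal sketch of how a controlled-O2I synthetic example of this kind can be assembled (the canvas size, clutter model and placement scheme are assumptions for illustration):

```python
# Sketch: paste a tiny 28x28 "object" (an MNIST-like digit patch) at a random position inside
# a much larger cluttered canvas, so that O2I = object area / image area is set explicitly.
import numpy as np

rng = np.random.default_rng(0)

def make_example(digit_patch, o2i_ratio):
    obj_h, obj_w = digit_patch.shape
    side = int(np.sqrt(obj_h * obj_w / o2i_ratio))         # canvas side for the requested O2I
    canvas = rng.uniform(0.0, 1.0, size=(side, side))      # background clutter / noise
    y = rng.integers(0, side - obj_h)
    x = rng.integers(0, side - obj_w)
    canvas[y:y + obj_h, x:x + obj_w] = digit_patch         # only this region correlates with the label
    return canvas

digit = rng.uniform(0.0, 1.0, size=(28, 28))               # stand-in for an MNIST digit
example = make_example(digit, o2i_ratio=0.001)             # 0.1% of the pixels carry the signal
print(example.shape)                                        # roughly (885, 885) for this ratio
```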
Although low input image signal-to-noise scenarios have been extensively studied in the signal processing field (e.g. in tasks such as image reconstruction), less attention has been devoted to low signal-to-noise classification scenarios.
Thus, in this paper we identified an unexplored machine learning problem, namely image classification in low and very low signal-to-noise ratios.
In order to study such scenarios, we built two datasets that allowed us to perform controlled experiments by manipulating the input image signal-to-noise ratio and highlighted that CNNs struggle to show good generalization for low and very low signal-to-noise ratios even for a relatively elementary MNIST-based dataset.
Finally, we ran a series of controlled experiments that explore both a variety of CNNs' architectural choices and the importance of training data scale for the low and very low signal-to-noise classification.
One of our main observations was that properly designed CNNs can be trained in the low O2I regime without using any pixel-level annotations and generalize if we leverage enough training data; however, the amount of training data required for the model to generalize scales rapidly with the inverse of the O2I ratio.
Thus, with our paper (and the code release) we invite the community to work on data-efficient solutions to low and very low signal-to-noise classification.
Our experimental study exhibits limitations: First, due to the lack of large scale datasets that allow for explicit control of the input signal-to-noise ratios, we were forced to use the synthetically built nMNIST dataset for most of our analysis.
As a real-life dataset, we used crops from the histopathology CAMELYON dataset; however, due to a relatively small number of unique lesions we were unable to scale the histopathology experiments to the same extent as the nMNIST experiments, and, as a result, some conclusions might be affected by the limited dataset size.
Other large scale computer vision datasets like MS COCO (Lin et al., 2014 ) exhibit correlations of the object of interest with the image background.
For MS COCO, the smallest O2I ratios are for the object category "sports ball" which on average occupies between 0.3% and 0.4% of an image and its presence tends to be correlated with the image background (e. g. presence of sports fields and players).
However, future research could examine a setup in which negative images contain objects of the categories "person" and "baseball bat" and positive images also contain "sports ball".
Second, all the tested models improve the generalization with larger dataset sizes; however, scaling datasets such as CAMELYON to tens of thousands of samples might be prohibitively expensive.
Instead, further research should be devoted to developing computationally-scalable, data-efficient inductive biases that can handle very low signal-to-noise ratios with limited dataset sizes.
Future work could explore exploiting knowledge of the low O2I ratio, and therefore of the signal's sparsity, as an inductive bias.
Finally, we studied low signal-to-noise scenarios only for binary classification; further investigation should be devoted to multiclass problems.
We hope that this study will stimulate the research in image classification for low signal-to-noise input scenarios. | We study low- and very-low-signal-to-noise classification scenarios, where objects that correlate with class label occupy tiny proportion of the entire image (e.g. medical or hyperspectral imaging). | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:680 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block.
Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks.
This raises the question: do learned attention layers operate similarly to convolutional layers?
This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice.
Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer.
Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis.
Our code is publicly available.
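As an editor-added numerical illustration (this is not the paper's released code, and the paper's constructive proof uses learned relative positional encodings rather than the hard-coded attention below), the sketch checks the expressivity claim in a degenerate case: a multi-head layer with one head per kernel offset, each head attending with a one-hot, purely positional pattern to a fixed pixel shift, reproduces a K x K convolution whose filters are the heads' value projections.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C_in, C_out, K = 6, 7, 3, 4, 3            # image size, channels, kernel size
X = rng.normal(size=(H, W, C_in))               # input image; each pixel is one "token"
kernel = rng.normal(size=(K, K, C_in, C_out))   # convolution filters

def conv2d_same(x, w):
    """Plain 'same' convolution (cross-correlation convention) with zero padding."""
    k, pad = w.shape[0], w.shape[0] // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], x.shape[1], w.shape[-1]))
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            patch = xp[i:i + k, j:j + k, :]                       # (K, K, C_in) receptive field
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def positional_multihead_attention(x, w):
    """K*K heads; head (di, dj) attends one-hot to the key pixel at that fixed offset.

    Its value projection is w[di, dj]; summing heads is the same as concatenating them
    and applying a block-identity output projection, so the result equals conv2d_same(x, w).
    """
    k, pad = w.shape[0], w.shape[0] // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], x.shape[1], w.shape[-1]))
    for di in range(k):
        for dj in range(k):
            keys = xp[di:di + x.shape[0], dj:dj + x.shape[1], :]  # key attended by every query
            out += keys @ w[di, dj]                               # this head's value projection
    return out

assert np.allclose(conv2d_same(X, kernel), positional_multihead_attention(X, kernel))
print("K*K one-hot positional heads reproduce the 3x3 convolution exactly.")
```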
Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the transformer (Vaswani et al., 2017) .
Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 (Radford et al., 2018) , BERT (Devlin et al., 2018) and Transformer-XL , seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks.
The key difference between transformers and previous methods, such as recurrent neural networks (Hochreiter & Schmidhuber, 1997) and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence.
This is made possible thanks to the attention mechanism-originally introduced in Neural Machine Translation to better handle long-range dependencies (Bahdanau et al., 2015) .
With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations.
The representation of each word is then updated based on those words whose attention score is highest.
Inspired by its capacity to learn meaningful inter-dependencies between words, researchers have recently considered utilizing self-attention in vision tasks.
Self-attention was first added to CNN by either using channel-based attention (Hu et al., 2018) or non-local relationships across the image (Wang et al., 2018) .
More recently, augmented CNNs by replacing some convolutional layers with self-attention layers, leading to improvements on image classification and object detection tasks.
Interestingly, Ramachandran et al. (2019) noticed that, even though state-of-the-art results are reached when attention and convolutional features are combined, under the same computation and model size constraints, self-attention-only architectures also reach competitive image classification accuracy.
These findings raise the question, do self-attention layers process images in a similar manner to convolutional layers?
From a theoretical perspective, one could argue that transformers have the capacity to simulate any function-including a CNN.
Indeed, Pérez et al. (2019) showed that a multilayer attention-based architecture with additive positional encodings is Turing complete under some strong theoretical assumptions, such as unbounded precision arithmetic.
Unfortunately, universality results do not reveal how a machine solves a task, only that it has the capacity to do so.
Thus, the question of how self-attention layers actually process images remains open.
We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and that fully-attentional models learn to combine local behavior (similar to convolution) and global attention based on input content.
More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters, similar to deformable convolutions (Dai et al., 2017; Zampieri, 2019) .
Interesting directions for future work include translating existing insights from the rich CNNs literature back to transformers on various data modalities, including images, text and time series. | A self-attention layer can perform convolution and often learns to do so in practice. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:681 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We introduce a “learning-based” algorithm for the low-rank decomposition problem: given an $n \times d$ matrix $A$, and a parameter $k$, compute a rank-$k$ matrix $A'$ that minimizes the approximation loss $||A- A'||_F$.
The algorithm uses a training set of input matrices in order to optimize its performance.
Specifically, some of the most efficient approximate algorithms for computing low-rank approximations proceed by computing a projection $SA$, where $S$ is a sparse random $m \times n$ “sketching matrix”, and then performing the singular value decomposition of $SA$.
We show how to replace the random matrix $S$ with a “learned” matrix of the same sparsity to reduce the error.
Our experiments show that, for multiple types of data sets, a learned sketch matrix can substantially reduce the approximation loss compared to a random matrix $S$, sometimes by one order of magnitude.
We also study mixed matrices where only some of the rows are trained and the remaining ones are random, and show that matrices still offer improved performance while retaining worst-case guarantees.
The success of modern machine learning made it applicable to problems that lie outside of the scope of "classic AI".
In particular, there has been a growing interest in using machine learning to improve the performance of "standard" algorithms, by fine-tuning their behavior to adapt to the properties of the input distribution, see e.g., [1]-[13].
This "learning-based" approach to algorithm design has attracted a considerable attention over the last few years, due to its potential to significantly improve the efficiency of some of the most widely used algorithmic tasks.
Many applications involve processing streams of data (video, data logs, customer activity etc) by executing the same algorithm on an hourly, daily or weekly basis.
These data sets are typically not "random" or "worst-case"; instead, they come from some distribution which does not change rapidly from execution to execution.
This makes it possible to design better algorithms tailored to the specific data distribution, trained on past instances of the problem.
The method has been particularly successful in the context of compressed sensing.
In the latter framework, the goal is to recover an approximation to an n-dimensional vector x, given its "linear measurement" of the form Sx, where S is an m × n matrix.
Theoretical results [14, 15] show that, if the matrix S is selected at random, it is possible to recover the k largest coefficients of x with high probability using a matrix S with m = O(k log n) rows.
This guarantee is general and applies to arbitrary vectors x.
However, if vectors x are selected from some natural distribution (e.g., they represent images), recent works [8, 9, 11] show that one can use samples from that distribution to compute matrices S that improve over a completely random matrix in terms of the recovery error.
Compressed sensing is an example of a broader class of problems which can be solved using random projections.
Another well-studied problem of this type is low-rank decomposition: given an $n \times d$ matrix $A$, and a parameter $k$, compute a rank-$k$ matrix $A'$ that minimizes the approximation loss $\|A - A'\|_F$.
Low-rank approximation is one of the most widely used tools in massive data analysis, machine learning and statistics, and has been a subject of many algorithmic studies.
In particular, multiple algorithms developed over the last decade use the "sketching" approach, see e.g., [16]-[24].
Its idea is to use efficiently computable random projections (a.k.a., "sketches") to reduce the problem size before performing low-rank decomposition, which makes the computation more space and time efficient.
For example, [16, 19] show that if S is a random matrix of size m × n chosen from an appropriate distribution, for m depending on $\epsilon$, then one can recover a rank-$k$ matrix $A'$ such that $\|A - A'\|_F \le (1+\epsilon)\,\|A - A_k\|_F$ (where $A_k$ is the best rank-$k$ approximation of $A$)
by performing an SVD on $SA \in \mathbb{R}^{m \times d}$ followed by some post-processing.
Typically the sketch length m is small, so the matrix SA can be stored using little space (in the context of streaming algorithms) or efficiently communicated (in the context of distributed algorithms).
Furthermore, the SVD of SA can be computed efficiently, especially after another round of sketching, reducing the overall computation time.
See the survey [25] for an overview of these developments.
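To make the pipeline concrete, here is an editor-added numpy sketch; it is not the code of [16, 19] or of this paper, and the exact post-processing may differ from the authors' implementation. It draws a Clarkson-Woodruff-style sparse sketch S (one random +/-1 entry per column), projects A onto the row space of SA, and truncates to rank k; a learned sketch would keep the same sparsity pattern but optimize the nonzero values on a training set of matrices.

```python
import numpy as np

def sparse_sketch(m, n, rng):
    """Sparse embedding: each column of S has a single +/-1 entry in a random row."""
    S = np.zeros((m, n))
    S[rng.integers(0, m, size=n), np.arange(n)] = rng.choice([-1.0, 1.0], size=n)
    return S

def scw_low_rank(S, A, k):
    """Rank-k approximation of A restricted to the row space of the sketch SA."""
    SA = S @ A                                           # m x d, with m << n
    V = np.linalg.svd(SA, full_matrices=False)[2].T      # orthonormal basis of rowspace(SA)
    B = A @ V                                            # project A onto that row space
    Ub, sb, Vbt = np.linalg.svd(B, full_matrices=False)
    return (Ub[:, :k] * sb[:k]) @ Vbt[:k] @ V.T          # best rank-k inside the subspace

rng = np.random.default_rng(0)
n, d, k, m = 500, 80, 10, 30
A = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.1 * rng.normal(size=(n, d))

A_sketched = scw_low_rank(sparse_sketch(m, n, rng), A, k)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_optimal = (U[:, :k] * s[:k]) @ Vt[:k]                  # exact truncated SVD baseline
print("sketched error:", np.linalg.norm(A - A_sketched),
      " optimal error:", np.linalg.norm(A - A_optimal))
```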
In light of the aforementioned work on learning-based compressive sensing, it is natural to ask whether similar improvements in performance could be obtained for other sketch-based algorithms, notably for low-rank decompositions.
In particular, reducing the sketch length m while preserving its accuracy would make sketch-based algorithms more efficient.
Alternatively, one could make sketches more accurate for the same values of m.
This is the problem we address in this paper.
Our Results.
Our main finding is that learned sketch matrices can indeed yield (much) more accurate low-rank decompositions than purely random matrices.
We focus our study on a streaming algorithm for low-rank decomposition due to [16, 19] , described in more detail in Section 2.
Specifically, suppose we have a training set of matrices Tr = $\{A_1, \ldots, A_N\}$ sampled from some distribution D. Based on this training set, we compute a matrix $S^*$ that (locally) minimizes the empirical loss $\sum_{i=1}^{N} \|A_i - \mathrm{SCW}(S^*, A_i)\|_F$,
where $\mathrm{SCW}(S^*, A_i)$ denotes the output of the aforementioned Sarlos-Clarkson-Woodruff streaming low-rank decomposition algorithm on matrix $A_i$ using the sketch matrix $S^*$.
Once the sketch matrix $S^*$ is computed, it can be used instead of a random sketch matrix in all future executions of the SCW algorithm.
We demonstrate empirically that, for multiple types of data sets, an optimized sketch matrix $S^*$ can substantially reduce the approximation loss compared to a random matrix $S$, sometimes by one order of magnitude (see Figure 1).
Equivalently, the optimized sketch matrix can achieve the same approximation loss for lower values of m.
A possible disadvantage of learned sketch matrices is that an algorithm that uses them no longer offers worst-case guarantees.
As a result, if such an algorithm is applied to an input matrix that does not conform to the training distribution, the results might be worse than if random matrices were used.
To alleviate this issue, we also study mixed sketch matrices, where (say) half of the rows are trained and the other half are random.
We observe that if such matrices are used in conjunction with the SCW algorithm, its results are no worse than if only the random part of the matrix was used.
Thus, the resulting algorithm inherits the worst-case performance guarantees of the random part of the sketching matrix.
At the same time, we show that mixed matrices still substantially reduce the approximation loss compared to random ones, in some cases nearly matching the performance of "pure" learned matrices with the same number of rows.
Thus, mixed random matrices offer "the best of both worlds": improved performance for matrices from the training distribution, and worst-case guarantees otherwise. | Learning-based algorithms can improve upon the performance of classical algorithms for the low-rank approximation problem while retaining the worst-case guarantee. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:682 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Neural conversational models are widely used in applications like personal assistants and chat bots.
These models seem to give better performance when operating on word level.
However, for fusion languages like French, Russian and Polish the vocabulary size sometimes becomes infeasible since most of the words have lots of word forms.
We propose a neural network architecture for transforming normalized text into a grammatically correct one.
Our model efficiently employs correspondence between normalized and target words and significantly outperforms character-level models while being 2x faster in training and 20\% faster at evaluation.
We also propose a new pipeline for building conversational models: first generate a normalized answer and then transform it into a grammatically correct one using our network.
The proposed pipeline gives better performance than character-level conversational models according to assessor testing.
Neural conversational models BID18 are used in a large number of applications: from technical support and chat bots to personal assistants.
While being a powerful framework, they often suffer from high computational costs. The main computational and memory bottleneck occurs at the vocabulary part of the model.
Vocabulary is used to map a sequence of input tokens to embedding vectors: one embedding vector is stored for each word in the vocabulary. English is de facto the standard language for training conversational models, mostly because of its large number of speakers and simple grammar.
In English, words usually have only a few word forms.
For example, verbs may occur in present and past tenses, and nouns can have singular and plural forms. For many other languages, however, some words may have tens of word forms.
This is the case for Polish, Russian, French and many other languages.
For these languages, storing all forms of frequent words in a vocabulary significantly increases computational costs. To reduce vocabulary size, we propose to normalize input and output sentences by putting them into a standard form.
Generated texts can then be converted into grammatically correct ones by solving the morphological agreement task.
This can be efficiently done by the model proposed in this work. Our contribution is two-fold: • We propose a neural network architecture for performing morphological agreement in fusion languages such as French, Polish and Russian (Section 2).
• We introduce a new approach to building conversational models: generating normalized text and then performing morphological agreement with the proposed model (Section 3).
In this paper we proposed a neural network model that can efficiently employ the relationship between input and output words in the morphological agreement task.
We also proposed a modification of this model that uses a context sentence.
We apply this model to neural conversational modeling in a new pipeline: we use the normalized question to generate a normalized answer and then apply the proposed model to obtain a grammatically correct response.
This model showed better performance than a character-level neural conversational model according to assessor responses. We achieved significant improvements compared to character-level, bigram and hierarchical sequence-to-sequence models on the morphological agreement task for Russian, French and Polish.
Trained models seem to understand the main grammatical rules and notions such as tenses, cases and pluralities. | Proposed architecture to solve morphological agreement task | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:683 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper proposes the use of spectral element methods \citep{canuto_spectral_1988} for fast and accurate training of Neural Ordinary Differential Equations (ODE-Nets; \citealp{Chen2018NeuralOD}) for system identification.
This is achieved by expressing their dynamics as a truncated series of Legendre polynomials.
The series coefficients, as well as the network weights, are computed by minimizing the weighted sum of the loss function and the violation of the ODE-Net dynamics.
The problem is solved by coordinate descent that alternately minimizes, with respect to the coefficients and the weights, two unconstrained sub-problems using standard backpropagation and gradient methods.
The resulting optimization scheme is fully time-parallel and results in a low memory footprint.
Experimental comparison to standard methods, such as backpropagation through explicit solvers and the adjoint technique \citep{Chen2018NeuralOD}, on training surrogate models of small and medium-scale dynamical systems shows that it is at least one order of magnitude faster at reaching a comparable value of the loss function.
The corresponding testing MSE is one order of magnitude smaller as well, suggesting generalization capabilities increase.
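As an editor-added illustration (not the paper's implementation), the sketch below expresses a trajectory as a truncated Legendre series and alternates between two sub-problems: fit the series coefficients against the data plus a dynamics-violation penalty at collocation points, then refit the dynamics model. To keep both sub-problems closed-form least squares, the dynamics model here is linear rather than an ODE-Net, and all weights and names are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre as leg

rng = np.random.default_rng(0)
deg, dim, lam = 12, 2, 10.0                     # series degree, state dimension, data-fit weight
t_col = np.linspace(-1.0, 1.0, 64)              # collocation points (Legendre domain [-1, 1])
t_obs = np.linspace(-1.0, 1.0, 20)              # sparse observation times
y_obs = np.stack([np.sin(2 * t_obs), np.cos(2 * t_obs)], axis=1)   # toy observed trajectory

def basis(t, deg):
    """Phi[j, i] = P_i(t_j) and dPhi[j, i] = P_i'(t_j) for Legendre polynomials P_i."""
    Phi = leg.legvander(t, deg)
    dPhi = np.stack([leg.legval(t, leg.legder(np.eye(deg + 1)[i])) for i in range(deg + 1)], axis=1)
    return Phi, dPhi

Phi_c, dPhi_c = basis(t_col, deg)
Phi_o, _ = basis(t_obs, deg)

C = np.zeros((deg + 1, dim))                    # Legendre coefficients of the trajectory
Theta = np.zeros((dim, dim))                    # parameters of the (linear) dynamics x_dot = x @ Theta

for _ in range(50):                             # coordinate descent over (C, Theta)
    # (1) fix Theta, solve for C:  min ||dPhi C - Phi C Theta||^2 + lam * ||Phi_o C - y_obs||^2
    design = np.vstack([np.kron(np.eye(dim), dPhi_c) - np.kron(Theta.T, Phi_c),
                        np.sqrt(lam) * np.kron(np.eye(dim), Phi_o)])
    target = np.concatenate([np.zeros(dim * len(t_col)), np.sqrt(lam) * y_obs.T.ravel()])
    C = np.linalg.lstsq(design, target, rcond=None)[0].reshape(dim, deg + 1).T
    # (2) fix C, solve for Theta:  min ||X_dot - X Theta||^2 at the collocation points
    X, X_dot = Phi_c @ C, dPhi_c @ C
    Theta = np.linalg.lstsq(X, X_dot, rcond=None)[0]

print("data-fit MSE:", np.mean((Phi_o @ C - y_obs) ** 2),
      " dynamics residual:", np.mean((dPhi_c @ C - (Phi_c @ C) @ Theta) ** 2))
```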
Neural Ordinary Differential Equations (ODE-Nets; Chen et al., 2018) can learn latent models from observations that are sparse in time.
This property has the potential to enhance the performance of neural network predictive models in applications where information is sparse in time and it is important to account for exact arrival times and delays.
In complex control systems and model-based reinforcement learning, planning over a long horizon is often needed, while high frequency feedback is necessary for maintaining stability (Franklin et al., 2014) .
Discrete-time models, including RNNs (Jain & Medsker, 1999) , often struggle to fully meet the needs of such applications due to the fixed time resolution.
ODE-Nets have been shown to provide superior performance with respect to classic RNNs on time series forecasting with sparse training data.
However, learning their parameters can be computationally intensive.
In particular, ODE-Nets are memory efficient but time inefficient.
In this paper, we address this bottleneck and propose a novel alternative strategy for system identification. | This paper proposes the use of spectral element methods for fast and accurate training of Neural Ordinary Differential Equations for system identification. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:684 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Exploration in sparse reward reinforcement learning remains an open challenge.
Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration.
Commonly these signals are added as bonus rewards, which results in a mixture policy that neither conducts exploration nor task fulfillment resolutely.
In this paper, we instead learn separate intrinsic and extrinsic task policies and schedule between these different drives to accelerate exploration and stabilize learning.
Moreover, we introduce a new type of intrinsic reward denoted as successor feature control (SFC), which is general and not task-specific.
It takes into account statistics over complete trajectories and thus differs from previous methods that only use local information to evaluate intrinsic motivation.
We evaluate our proposed scheduled intrinsic drive (SID) agent using three different environments with pure visual inputs: VizDoom, DeepMind Lab and DeepMind Control Suite.
The results show a substantially improved exploration efficiency with SFC and the hierarchical usage of the intrinsic drives.
A video of our experimental results can be found at https://gofile.io/?c=HpEwTd.
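As an editor-added toy of the scheduling idea (none of this is the authors' code), the loop below alternates every few steps between an extrinsic-task policy and an exploration policy when collecting experience, and stores both reward signals so each policy can later be trained on its own drive. The environment, the stand-in policies, and the count-based bonus used in place of the successor-feature-based intrinsic reward are all assumptions made to keep the example runnable.

```python
import random

random.seed(0)

class CorridorEnv:
    """Toy chain MDP: start at state 0; the only extrinsic reward is at state 20 (sparse)."""
    def __init__(self):
        self.s = 0
    def reset(self):
        self.s = 0
        return self.s
    def step(self, action):                  # action in {-1, +1}
        self.s = max(0, self.s + action)
        done = self.s >= 20
        return self.s, float(done), done

def extrinsic_policy(state):                 # stand-in for the learned task policy
    return +1 if random.random() < 0.6 else -1

def intrinsic_policy(state):                 # stand-in for the learned exploration policy
    return random.choice([-1, +1])

def intrinsic_reward(next_state, visit_counts):
    # Count-based novelty bonus as a simple stand-in for the successor-feature-based reward.
    visit_counts[next_state] = visit_counts.get(next_state, 0) + 1
    return visit_counts[next_state] ** -0.5

env, visit_counts, switch_every = CorridorEnv(), {}, 10
replay = []                                  # shared experience: (s, a, r_ext, r_int, s_next)

for episode in range(5):
    s, done, t = env.reset(), False, 0
    while not done and t < 200:
        # High-level scheduler: every `switch_every` steps pick which drive acts.
        behavior = extrinsic_policy if (t // switch_every) % 2 == 0 else intrinsic_policy
        a = behavior(s)
        s_next, r_ext, done = env.step(a)
        replay.append((s, a, r_ext, intrinsic_reward(s_next, visit_counts), s_next))
        s, t = s_next, t + 1
    # In the full method, both policies would now be trained off-policy on `replay`,
    # each maximizing its own reward column.

print(f"collected {len(replay)} transitions, extrinsic successes:",
      sum(1 for tr in replay if tr[2] > 0))
```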
Reinforcement learning (RL) agents learn on evaluative feedback (reward signals) instead of instructive feedback (ground truth labels), which takes the process of automating the development of intelligent problem-solving agents one step further (Sutton & Barto, 2018) .
With deep networks as powerful function approximators bringing traditional RL into high-dimensional domains, deep reinforcement learning (DRL) has shown great potential (Mnih et al., 2015; Schulman et al., 2017; Horgan et al., 2018) .
However, the success of DRL often relies on carefully shaped dense extrinsic reward signals.
Although shaping extrinsic rewards can greatly support the agent in finding solutions and shortening the interaction time, designing such dense extrinsic signals often requires substantial domain knowledge, and calculating them typically requires ground truth state information, both of which is hard to obtain in the context of robots acting in the real world.
When not carefully designed, the reward shape could sometimes serve as bias or even distractions and could potentially hinder the discovery of optimal solutions.
More importantly, learning on dense extrinsic rewards goes backwards on the progress of reducing supervision and could prevent the agent from taking full advantage of the RL framework.
In this paper, we consider terminal reward RL settings, where a signal is only given when the final goal is achieved.
When learning with only an extrinsic terminal reward indicating the task at hand, intelligent agents are given the opportunity to potentially discover optimal solutions even out of the scope of the well established domain knowledge.
However, in many real-world problems defining a task only by a terminal reward means that the learning signal can be extremely sparse.
The RL agent would have no clue about what task to accomplish until it receives the terminal reward for the first time by chance.
Therefore in those scenarios guided and structured exploration is crucial, which is where intrinsically-motivated exploration (Oudeyer & Kaplan, 2008; Schmidhuber, 2010) has recently gained great success (Pathak et al., 2017; Burda et al., 2018b) .
Most commonly in current state-of-the-art approaches, an intrinsic reward is added as a reward bonus to the extrinsic reward.
Maximizing this combined reward signal, however, results in a mixture policy that neither acts greedily with regard to extrinsic reward maximization nor to exploration.
Furthermore, the non-stationary nature of the intrinsic signals could potentially lead to unstable learning on the combined reward.
In addition, current state-of-the-art methods have been mostly looking at local information calculated out of 1-step lookahead for the estimation of the intrinsic rewards, e.g. one step prediction error (Pathak et al., 2017) , or network distillation error of the next state (Burda et al., 2018b) .
Although those intrinsic signals can be propagated back to earlier states with temporal difference (TD) learning, it is not clear that this results in optimal long-term exploration.
We seek to address the aforementioned issues as follows:
1. We propose a hierarchical agent scheduled intrinsic drive (SID) that focuses on one motivation at a time: It learns two separate policies which maximize the extrinsic and intrinsic rewards respectively.
A high-level scheduler periodically selects to follow either the extrinsic or the intrinsic policy to gather experiences.
Disentangling the two policies allows the agent to faithfully conduct either pure exploration or pure extrinsic task fulfillment.
Moreover, scheduling (even within an episode) implicitly increases the behavior policy space exponentially, which drastically differs from previous methods where the behavior policy could only change slowly due to the incremental nature of TD learning.
2. We introduce successor feature control (SFC), a novel intrinsic reward that is based on the concept of successor features.
This feature representation characterizes states through the features of all its successor states instead of looking at local information only.
This implicitly makes our method temporally extended, which enables more structured and farsighted exploration that is crucial in exploration-challenging environments.
We note that both the proposed intrinsic reward SFC and the hierarchical exploration framework SID are without any task-specific components, and can be incorporated into existing DRL methods with minimal computation overhead.
We present experimental results in three sets of environments, evaluating our proposed agent in the domains of visual navigation and control from pixels, as well as its capabilities of finding optimal solutions under distraction.
In this paper, we investigate an alternative way of utilizing intrinsic motivation for exploration in DRL.
We propose a hierarchical agent SID that schedules between following extrinsic and intrinsic drives.
Moreover, we propose a new type of intrinsic reward SFC that is general and evaluates the intrinsic motivation based on longer time horizons.
We conduct experiments in three sets of environments and show that both our contributions SID and SFC help greatly in improving exploration efficiency.
We consider many possible research directions that could stem from this work, including designing more efficient scheduling strategies, incorporating several intrinsic drives (that are possibly orthogonal and complementary) instead of only one into SID, testing our framework in other control domains such as manipulation, combining the successor representation with learned feature representations and extending our evaluation onto real robotics systems. | A new intrinsic reward signal based on successor features and a novel way to combine extrinsic and intrinsic reward. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:685 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Meta-Reinforcement learning approaches aim to develop learning procedures that can adapt quickly to a distribution of tasks with the help of a few examples.
Developing efficient exploration strategies capable of finding the most useful samples becomes critical in such settings.
Existing approaches to finding efficient exploration strategies add auxiliary objectives to promote exploration by the pre-update policy; however, this makes adaptation using a few gradient steps difficult, as the pre-update (exploration) and post-update (exploitation) policies are quite different.
Instead, we propose to explicitly model a separate exploration policy for the task distribution.
Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier.
We show that using self-supervised or supervised learning objectives for adaptation stabilizes the training process and also demonstrate the superior performance of our model compared to prior works in this domain.
Reinforcement learning (RL) approaches have seen many successes in recent years, from mastering the complex game of Go BID10 to even discovering molecules BID8 .
However, a common limitation of these methods is their propensity to overfit on a single task and their inability to adapt to even slightly perturbed configurations BID12.
On the other hand, humans have this astonishing ability to learn new tasks in a matter of minutes by using their prior knowledge and understanding of the underlying task mechanics.
Drawing inspiration from human behaviors, researchers have proposed to incorporate multiple inductive biases and heuristics to help the models learn quickly and generalize to unseen scenarios.
However, despite a lot of effort it has been difficult to approach human levels of data efficiency and generalization. Meta-RL tries to address these shortcomings by learning these inductive biases and heuristics from the data itself.
These inductive biases or heuristics can be induced in the model in various ways like optimization, policy initialization, loss function, exploration strategies, etc.
Recently, a class of policy initialization based meta-learning approaches have gained attention like Model Agnostic MetaLearning (MAML) BID1 .
MAML finds a good initialization for a policy that can be adapted to a new task by fine-tuning with policy gradient updates from a few samples of that task. Given the objective of meta-RL algorithms to adapt to a new task from a few examples, efficient exploration strategies are crucial for quickly finding the optimal policy in a new environment.
Some recent works BID3 have tried to address this problem by using latent variables to model the distribution of exploration behaviors.
Another set of approaches BID11 BID9 focus on improving the credit assignment of the meta learning objective to the pre-update trajectory distribution.
However, all these prior works use one or few policy gradient updates to transition from preto post-update policy.
This limits the applicability of these methods to cases where the post-update (exploitation) policy is similar to the pre-update (exploration) policy and can be obtained with only a few updates.
Also, for cases where pre-and post-update policies are expected to exhibit different behaviors, large gradient updates may result in training instabilities and lack of convergence.
To address this problem, we propose to explicitly model a separate exploration policy for the distribution of tasks.
The exploration policy is trained to find trajectories that can lead to fast adaptation of the exploitation policy on the given task.
This formulation provides much more flexibility in training the exploration policy.
In the process, we also establish that, in order to adapt as quickly as possible to the new task, it is often more useful to use self-supervised or supervised learning approaches, where possible, to get more effective updates.
Unlike conventional meta-RL approaches, we proposed to explicitly model a separate exploration policy for the task distribution.
Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier.
Hence, as future work, we would like to explore the use of separate exploration and exploitation policies in other meta-learning approaches as well.
We showed that, through various experiments on both sparse and dense reward tasks, our model outperforms previous works while also being very stable during training.
This validates that using self-supervised techniques increases the stability of these updates thus allowing us to use a separate exploration policy to collect the initial trajectories.
Further, we also show that the variance reduction techniques used in the objective of the exploration policy also have a huge impact on the performance.
However, we would like to note that the idea of using a separate exploration and exploitation policy is much more general and doesn't need to be restricted to MAML.
... to compute $M_{\beta,z}(s_t, a_t) = w_{\beta}^{\top} m_{\beta}(s_t, a_t)$.
Using the successor representations can effectively be seen as using a more accurate/powerful baseline than directly predicting the N-step returns using the $(s_t, a_t)$ pair. | We propose to use a separate exploration policy to collect the pre-adaptation trajectories in MAML. We also show that using a self-supervised objective in the inner loop leads to more stable training and much better performance. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:686 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The “Supersymmetric Artificial Neural Network” in deep learning (denoted $(x; \theta, \bar{\theta})^{T}w$), espouses the importance of considering biological constraints in the aim of further generalizing backward propagation.
Looking at the progression of ‘solution geometries’; going from SO(n) representation (such as Perceptron like models) to SU(n) representation (such as UnitaryRNNs) has guaranteed richer and richer representations in weight space of the artificial neural network, and hence better and better hypotheses were generatable.
The Supersymmetric Artificial Neural Network explores a natural step forward, namely SU(m|n) representation.
These supersymmetric biological brain representations (Perez et al.) can be represented by supercharge compatible special unitary notation SU(m|n), or $(x; \theta, \bar{\theta})^{T}w$ parameterized by $\theta, \bar{\theta}$, which are supersymmetric directions, unlike $\theta$ seen in the typical non-supersymmetric deep learning model.
Notably, Supersymmetric values can encode or represent more information than the typical deep learning model, in terms of “partner potential” signals for example.
Pertinently, the "Edward Witten/String theory powered supersymmetric artificial neural network" is one wherein supersymmetric weights are sought. Many machine learning algorithms are not empirically shown to be exactly biologically plausible, i.e. Deep Neural Network algorithms have not been observed to occur in the brain, but regardless, such algorithms work in practice in machine learning. Likewise, regardless of Supersymmetry's elusiveness at the LHC, as seen above, it may be quite feasible to borrow formal methods from strategies in physics even if such strategies are yet to show related physical phenomena to exist; thus it may be pertinent/feasible to try to construct a model that learns supersymmetric weights, as I proposed throughout this paper, following the progression of solution geometries going from SO(n) to SU(n) and onwards to SU(m|n) BID15. | Generalizing backward propagation, using formal methods from supersymmetry. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:687 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Regularization-based continual learning approaches generally prevent catastrophic forgetting by augmenting the training loss with an auxiliary objective.
However in most practical optimization scenarios with noisy data and/or gradients, it is possible that stochastic gradient descent can inadvertently change critical parameters.
In this paper, we argue for the importance of regularizing optimization trajectories directly.
We derive a new co-natural gradient update rule for continual learning whereby the new task gradients are preconditioned with the empirical Fisher information of previously learnt tasks.
We show that using the co-natural gradient systematically reduces forgetting in continual learning.
Moreover, it helps combat overfitting when learning a new task in a low resource scenario.
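To make the update rule concrete, here is an editor-added numpy sketch rather than the paper's implementation: the new-task gradient is preconditioned with the empirical Fisher information accumulated on previous tasks, g <- (F_old + eps*I)^{-1} g, so directions that mattered for old tasks (large Fisher eigenvalues) are damped. The damping constant, the use of a full rather than diagonal or factored Fisher, and the synthetic gradients are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_old_examples = 50, 200

# Empirical Fisher of previous tasks: average of g g^T over old-task per-example gradients.
old_grads = rng.normal(size=(n_old_examples, n_params))
old_grads[:, :5] *= 10.0                        # pretend the first 5 parameters mattered a lot
F_old = old_grads.T @ old_grads / n_old_examples

def co_natural_step(params, grad_new, F_old, lr=0.1, eps=1e-3):
    """One preconditioned update on the new task (illustrative name and form)."""
    precond_grad = np.linalg.solve(F_old + eps * np.eye(len(params)), grad_new)
    return params - lr * precond_grad

params = np.zeros(n_params)
grad_new = rng.normal(size=n_params)
updated = co_natural_step(params, grad_new, F_old)
print("mean |update| on old-task-critical params:", np.abs(updated[:5]).mean())
print("mean |update| on the remaining params:    ", np.abs(updated[5:]).mean())
```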
It is good to have an end to journey toward; but it is the journey that matters, in the end.
We have presented the co-natural gradient, a technique that regularizes the optimization trajectory of models trained in a continual setting.
We have shown that the co-natural gradient stands on its own as an efficient approach for overcoming catastrophic forgetting, and that it effectively complements and stabilizes other existing techniques at a minimal cost.
We believe that the co-natural gradientand more generally, trajectory regularization -can serve as a solid bedrock for building agents that learn without forgetting. | Regularizing the optimization trajectory with the Fisher information of old tasks reduces catastrophic forgetting greatly | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:688 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We study the problem of generating source code in a strongly typed,
Java-like programming language, given a label (for example a set of
API calls or types) carrying a small amount of information about the
code that is desired.
The generated programs are expected to respect a
"realistic" relationship between programs and labels, as exemplified
by a corpus of labeled programs available during training.
Two challenges in such *conditional program generation* are that
the generated programs must satisfy a rich set of syntactic and
semantic constraints, and that source code contains many low-level
features that impede learning.
We address these problems by training
a neural generator not on code but on *program sketches*, or
models of program syntax that abstract out names and operations that
do not generalize across programs.
During generation, we infer a
posterior distribution over sketches, then concretize samples from
this distribution into type-safe programs using combinatorial
techniques.
We implement our ideas in a system for generating
API-heavy Java code, and show that it can often predict the entire
body of a method given just a few API calls or data types that appear
in the method.
Neural networks have been successfully applied to many generative modeling tasks in the recent past BID22 BID11 BID33 .
However, the use of these models in generating highly structured text remains relatively understudied.
In this paper, we present a method, combining neural and combinatorial techniques, for the conditional generation of an important category of such text: the source code of programs in Java-like programming languages. The specific problem we consider is one of supervised learning.
During training, we are given a set of programs, each program annotated with a label, which may contain information such as the set of API calls or the types used in the code.
Our goal is to learn a function g such that for a test case of the form (X, Prog) (where Prog is a program and X is a label), g(X) is a compilable, type-safe program that is equivalent to Prog. This problem has immediate applications in helping humans solve programming tasks BID12 BID26.
In the usage scenario that we envision, a human programmer uses a label to specify a small amount of information about a program that they have in mind.
Based on this information, our generator seeks to produce a program equivalent to the "target" program, thus performing a particularly powerful form of code completion. Conditional program generation is a special case of program synthesis BID19 BID32, the classic problem of generating a program given a constraint on its behavior.
This problem has received significant interest in recent years BID2 BID10 .
In particular, several neural approaches to program synthesis driven by input-output examples have emerged BID3 BID23 BID5 .
Fundamentally, these approaches are tasked with associating a program's syntax with its semantics.
As doing so in general is extremely hard, these methods choose to only generate programs in highly controlled domainspecific languages.
For example, BID3 consider a functional language in which the only data types permitted are integers and integer arrays, control flow is linear, and there is a sum total of 15 library functions.
Given a set of input-output examples, their method predicts a vector of binary attributes indicating the presence or absence of various tokens (library functions) in the target program, and uses this prediction to guide a combinatorial search for programs. In contrast, in conditional program generation, we are already given a set of tokens (for example library functions or types) that appear in a program or its metadata.
Thus, we sidestep the problem of learning the semantics of the programming language from data.
We ask: does this simpler setting permit the generation of programs from a much richer, Java-like language, which has thousands of data types and API methods, rich control flow and exception handling, and a strong type system?
While simpler than general program synthesis, this problem is still highly nontrivial.
Perhaps the central issue is that to be acceptable to a compiler, a generated program must satisfy a rich set of structural and semantic constraints such as "do not use undeclared variables as arguments to a procedure call" or "only use API calls and variables in a type-safe way".
Learning such constraints automatically from data is hard.
Moreover, as this is also a supervised learning problem, the generated programs also have to follow the patterns in the data while satisfying these constraints. We approach this problem with a combination of neural learning and type-guided combinatorial search BID6.
Our central idea is to learn not over source code, but over tree-structured syntactic models, or sketches, of programs.
A sketch abstracts out low-level names and operations from a program, but retains information about the program's control structure, the orders in which it invokes API methods, and the types of arguments and return values of these methods.
We propose a particular kind of probabilistic encoder-decoder, called a Gaussian Encoder-Decoder or GED, to learn a distribution over sketches conditioned on labels.
During synthesis, we sample sketches from this distribution, then flesh out these samples into type-safe programs using a combinatorial method for program synthesis.
Doing so effectively is possible because our sketches are designed to contain rich information about control flow and types. We have implemented our approach in a system called BAYOU.
We evaluate BAYOU in the generation of API-manipulating Android methods, using a corpus of about 150,000 methods drawn from an online repository.
Our experiments show that BAYOU can often generate complex method bodies, including methods implementing tasks not encountered during training, given a few tokens as input.
We have given a method for generating type-safe programs in a Java-like language, given a label containing a small amount of information about a program's code or metadata.
Our main idea is to learn a model that can predict sketches of programs relevant to a label.
The predicted sketches are concretized into code using combinatorial techniques.
We have implemented our ideas in BAYOU, a system for the generation of API-heavy code.
Our experiments indicate that the system can often generate complex method bodies from just a few tokens, and that learning at the level of sketches is key to performing such generation effectively. An important distinction between our work and classical program synthesis is that our generator is conditioned on uncertain, syntactic information about the target program, as opposed to hard constraints on the program's semantics.
Of course, the programs that we generate are type-safe, and therefore guaranteed to satisfy certain semantic constraints.
However, these constraints are invariant across generation tasks; in contrast, traditional program synthesis permits instance-specific semantic constraints.
Future work will seek to condition program generation on syntactic labels as well as semantic constraints.
As mentioned earlier, learning correlations between the syntax and semantics of programs written in complex languages is difficult.
However, the approach of first generating and then concretizing a sketch could reduce this difficulty: sketches could be generated using a limited amount of semantic information, and the concretizer could use logic-based techniques BID2 BID10 to ensure that the programs synthesized from these sketches match the semantic constraints exactly.
A key challenge here would be to calibrate the amount of semantic information on which sketch generation is conditioned.
Appendix A: THE AML LANGUAGE. AML is a core language that is designed to capture the essence of API usage in Java-like languages.
Now we present this language.
AML uses a finite set of API data types.
A type is identified with a finite set of API method names (including constructors); the type for which this set is empty is said to be void.
Each method name a is associated with a type signature $(\tau_1, \ldots, \tau_k) \rightarrow \tau_0$, where $\tau_1, \ldots, \tau_k$ are the method's input types and $\tau_0$ is its return type.
A method for which $\tau_0$ is void is interpreted to not return a value.
Finally, we assume predefined universes of constants and variable names. The grammar for AML is as in FIG4.
Here, $x, x_1, \ldots$ are variable names, c is a constant, and a is a method name.
The syntax for programs Prog includes method calls, loops, branches, statement sequencing, and exception handling.
We use variables to feed the output of one method into another, and the keyword let to store the return value of a call in a fresh variable.
Exp stands for (object-valued) expressions, which include constants, variables, method calls, and let-expressions such as "let x = Call : Exp", which stores the return value of a call in a fresh variable x, then uses this binding to evaluate the expression Exp. (Arithmetic and relational operators are assumed to be encompassed by API methods.) The operational semantics and type system for AML are standard, and consequently, we do not describe these in detail. | We give a method for generating type-safe programs in a Java-like language, given a small amount of syntactic information about the desired code. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:689 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization.
In this work we suggest a novel complexity measure based on unit-wise capacities resulting in a tighter generalization bound for two layer ReLU networks.
Our capacity bound correlates with the behavior of test error with increasing network sizes (within the range reported in the experiments), and could partly explain the improvement in generalization with over-parametrization.
We further present a matching lower bound for the Rademacher complexity that improves over previous capacity lower bounds for neural networks.
Deep neural networks have enjoyed great success in learning across a wide variety of tasks.
They played a crucial role in the seminal work of Krizhevsky et al. (2012) , starting an arms race of training larger networks with more hidden units, in pursuit of better test performance (He et al., 2016) .
In fact the networks used in practice are over-parametrized to the extent that they can easily fit random labels to the data (Zhang et al., 2017) .
Even though they have such a high capacity, when trained with real labels they achieve smaller generalization error. Traditional wisdom in learning suggests that using models with increasing capacity will result in overfitting to the training data.
Hence the capacity of the models is generally controlled either by limiting the size of the model (number of parameters) or by adding an explicit regularization, to prevent overfitting to the training data.
Surprisingly, in the case of neural networks we notice that increasing the model size only helps in improving the generalization error, even when the networks are trained without any explicit regularization -weight decay or early stopping (Lawrence et al., 1998; Srivastava et al., 2014; Neyshabur et al., 2015c) .
In particular, Neyshabur et al. (2015c) observed that training models with an increasing number of hidden units leads to a decrease in the test error for image classification on MNIST and CIFAR-10.
Similar empirical observations have been made over a wide range of architectural and hyper-parameter choices (Liang et al., 2017; Novak et al., 2018; Lee et al., 2018) .
What explains this improvement in generalization with over-parametrization?
What is the right measure of complexity of neural networks that captures this generalization phenomenon?
Complexity measures that depend on the total number of parameters of the network, such as VC bounds, do not capture this behavior, as they increase with the size of the network. Existing works suggested different norm, margin and sharpness based measures to capture the capacity of neural networks, in an attempt to explain the generalization behavior observed in practice (Neyshabur et al., 2015b; Keskar et al., 2017; Dziugaite & Roy, 2017; Neyshabur et al., 2017; Bartlett et al., 2017; Golowich et al., 2018; BID0).
[Figure caption] Left panel: even after the network is large enough to completely fit the training data (reference line), the test error continues to decrease for larger networks. Middle panel: training a fully connected feedforward network with a single hidden layer on CIFAR-10 shows the same phenomenon as observed for the ResNet18 architecture. Right panel: unit capacity captures the complexity of a hidden unit and unit impact captures the impact of a hidden unit on the output of the network; both are important factors in our capacity bound (Theorem 1). We observe empirically that both average unit capacity and average unit impact shrink at a rate faster than 1/√h, where h is the number of hidden units. Please see Supplementary Section A for experiment settings.
In particular, Bartlett et al. (2017) showed a margin based generalization bound that depends on the spectral norm and ℓ1,2 norm of the layers of a network. However, as shown in Neyshabur et al. (2017) and in FIG6, these complexity measures fail to explain why over-parametrization helps, and in fact increase with the size of the network. Dziugaite & Roy (2017) numerically evaluated a generalization bound based on PAC-Bayes. Their reported numerical generalization bounds also increase with increasing network size. These existing complexity measures increase with the size of the network, even for two layer networks, as they depend on the number of hidden units either explicitly, or because the norms in their measures implicitly depend on the number of hidden units for the networks used in practice (Neyshabur et al., 2017)
In this paper we present a new capacity bound for neural networks that empirically decreases with the increasing number of hidden units, and could potentially explain the better generalization performance of larger networks.
In particular, we focused on understanding the role of width in the generalization behavior of two layer networks.
More generally, understanding the role of depth and the interplay between depth and width in controlling capacity of networks, remain interesting directions for future study.
We also provided a matching lower bound for the capacity improving on the current lower bounds for neural networks.
While these bounds are useful for relative comparison between networks of different size, their absolute values still remain larger than the number of training samples, and it is of interest to get bounds with numerically smaller values. In this paper we do not address the question of whether optimization algorithms converge to low complexity networks in the function class considered in this paper, or in general how different hyper-parameter choices affect the complexity of the recovered solutions.
It is interesting to understand the implicit regularization effects of the optimization algorithms (Neyshabur et al., 2015a; Gunasekar et al., 2017; Soudry et al., 2018) for neural networks, which we leave for future work. | We suggest a generalization bound that could partly explain the improvement in generalization with over-parametrization. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:69 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose an approach for sequence modeling based on autoregressive normalizing flows.
Each autoregressive transform, acting across time, serves as a moving reference frame for modeling higher-level dynamics.
This technique provides a simple, general-purpose method for improving sequence modeling, with connections to existing and classical techniques.
We demonstrate the proposed approach both with standalone models, as well as a part of larger sequential latent variable models.
Results are presented on three benchmark video datasets, where flow-based dynamics improve log-likelihood performance over baseline models.
Data often contain sequential structure, providing a rich signal for learning models of the world.
Such models are useful for learning self-supervised representations of sequences (Li & Mandt, 2018; Ha & Schmidhuber, 2018) and planning sequences of actions (Chua et al., 2018; Hafner et al., 2019) .
While sequential models have a longstanding tradition in probabilistic modeling (Kalman et al., 1960) , it is only recently that improved computational techniques, primarily deep networks, have facilitated learning such models from high-dimensional data (Graves, 2013) , particularly video and audio.
Dynamics in these models typically contain a combination of stochastic and deterministic variables (Bayer & Osendorfer, 2014; Chung et al., 2015; Gan et al., 2015; Fraccaro et al., 2016) , using simple distributions (e.g. Gaussian) to directly model the likelihood of data observations.
However, attempting to capture all sequential dependencies with relatively unstructured dynamics may make it more difficult to learn such models.
Intuitively, the model should use its dynamical components to track changes in the input instead of simultaneously modeling the entire signal.
Rather than expanding the computational capacity of the model, we seek a method for altering the representation of the data to provide a more structured form of dynamics.
To incorporate more structured dynamics, we propose an approach for sequence modeling based on autoregressive normalizing flows (Kingma et al., 2016; Papamakarios et al., 2017) , consisting of one or more autoregressive transforms in time.
A single transform is equivalent to a Gaussian autoregressive model.
However, by stacking additional transforms or latent variables on top, we can arrive at more expressive models.
Each autoregressive transform serves as a moving reference frame in which higher-level structure is modeled.
This provides a general mechanism for separating different forms of dynamics, with higher-level stochastic dynamics modeled in the simplified space provided by lower-level deterministic transforms.
In fact, as we discuss, this approach generalizes the technique of modeling temporal derivatives to simplify dynamics estimation (Friston, 2008 ).
We empirically demonstrate this approach, both with standalone autoregressive normalizing flows, as well as by incorporating these flows within more flexible sequential latent variable models.
While normalizing flows have been applied in a few sequential contexts previously, we emphasize the use of these models in conjunction with sequential latent variable models.
We present experimental results on three benchmark video datasets, showing improved quantitative performance in terms of log-likelihood.
In formulating this general technique for improving dynamics estimation in the framework of normalizing flows, we also help to contextualize previous work.
[Figure 1 caption: Affine Autoregressive Transforms. Computational diagrams for forward and inverse affine autoregressive transforms (Papamakarios et al., 2017). Each y_t is an affine transform of x_t, with the affine parameters potentially non-linear functions of x_{<t}; the inverse transform can convert a correlated input x_{1:T} into a less correlated variable y_{1:T}.]
We have presented a technique for improving sequence modeling based on autoregressive normalizing flows.
This technique uses affine transforms to temporally decorrelate sequential data, thereby simplifying the estimation of dynamics.
We have drawn connections to classical approaches, which involve modeling temporal derivatives.
Finally, we have empirically shown how this technique can improve sequential latent variable models. | We show how autoregressive flows can be used to improve sequential latent variable models. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:690 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
It is well-known that many machine learning models are susceptible to adversarial attacks, in which an attacker evades a classifier by making small perturbations to inputs.
This paper discusses how industrial copyright detection tools, which serve a central role on the web, are susceptible to adversarial attacks.
We discuss a range of copyright detection systems, and why they are particularly vulnerable to attacks.
These vulnerabilities are especially apparent for neural network based systems.
As proof of concept, we describe a well-known music identification method and implement this system in the form of a neural net.
We then attack this system using simple gradient methods.
Adversarial music created this way successfully fools industrial systems, including the AudioTag copyright detector and YouTube's Content ID system.
Our goal is to raise awareness of the threats posed by adversarial examples in this space and to highlight the importance of hardening copyright detection systems to attacks.
Machine learning systems are easily manipulated by adversarial attacks, in which small perturbations to input data cause large changes to the output of a model.
Such attacks have been demonstrated on a number of potentially sensitive systems, largely in an idealized academic context, and occasionally in the real-world (Tencent, 2019; Kurakin et al., 2016; Athalye et al., 2017; Eykholt et al., 2017; Yakura & Sakuma, 2018; Qin et al., 2019) .
Copyright detection systems are among the most widely used machine learning systems in industry, and the security of these systems is of foundational importance to some of the largest companies in the world.
Despite their importance, copyright systems have gone largely unstudied by the ML security community.
Common approaches to copyright detection extract features, called fingerprints, from sampled video or audio, and then match these features with a library of known fingerprints.
Examples include YouTube's Content ID, which flags copyrighted material on YouTube and enables copyright owners to monetize and control their content.
At the time of writing this paper, more than 100 million dollars have been spent on Content ID, which has resulted in more than 3 billion dollars in revenue for copyright holders (Manara, 2018) .
Closely related tools such as Google Jigsaw detect and remove videos that promote terrorism or jeopardized national security.
There is also a regulatory push for the use of copyright detection systems; the recent EU Copyright Directive requires any service that allows users to post text, sound, or video to implement a copyright filter.
A wide range of copyright detection systems exist, most of which are proprietary.
It is not possible to demonstrate attacks against all systems, and this is not our goal.
Rather, the purpose of this paper is to discuss why copyright detectors are especially vulnerable to adversarial attacks and establish how existing attacks in the literature can potentially exploit audio and video copyright systems.
As a proof of concept, we demonstrate an attack against real-world copyright detection systems for music.
To do this, we reinterpret a simple version of the well-known "Shazam" algorithm for music fingerprinting as a neural network and build a differentiable implementation of it in TensorFlow (Abadi et al., 2016) .
By using a gradient-based attack and an objective that is designed to achieve good transferability to black-box models, we create adversarial music that is easily recognizable to a human, while evading detection by a machine.
With sufficient perturbations, our adversarial music successfully fools industrial systems, including the AudioTag music recognition service (AudioTag, 2009) and YouTube's Content ID system (Google, 2019).
Copyright detection systems are an important category of machine learning methods, but the robustness of these systems to adversarial attacks has not been addressed yet by the machine learning community.
We discussed the vulnerability of copyright detection systems, and explain how different kinds of systems may be vulnerable to attacks using known methods.
As a proof of concept, we build a simple song identification method using neural network primitives and attack it using well-known gradient methods.
Surprisingly, attacks on this model transfer well to online systems.
Note that none of the authors of this paper are experts in audio processing or fingerprinting systems.
The implementations used in this study are far from optimal, and we expect that attacks can be strengthened using sharper technical tools, including perturbation types that are less perceptible to the human ear.
Furthermore, we are doing transfer attacks using fairly rudimentary surrogate models that rely on hand-crafted features, while commercial systems likely rely on fully trainable neural nets.
Our goal here is not to facilitate copyright evasion, but rather to raise awareness of the threats posed by adversarial examples in this space, and to highlight the importance of hardening copyright detection and content control systems to attack.
A number of defenses already exist that can be utilized for this purpose, including adversarial training. | Adversarial examples can fool YouTube's copyright detection system | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:691 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Equilibrium Propagation (EP) is a learning algorithm that bridges Machine Learning and Neuroscience, by computing gradients closely matching those of Backpropagation Through Time (BPTT), but with a learning rule local in space.
Given an input x and associated target y, EP proceeds in two phases: in the first phase neurons evolve freely towards a first steady state; in the second phase output neurons are nudged towards y until they reach a second steady state.
However, in existing implementations of EP, the learning rule is not local in time:
the weight update is performed after the dynamics of the second phase have converged and requires information of the first phase that is no longer available physically.
This is a major impediment to the biological plausibility of EP and its efficient hardware implementation.
In this work, we propose a version of EP named Continual Equilibrium Propagation (C-EP) where neuron and synapse dynamics occur simultaneously throughout the second phase, so that the weight update becomes local in time.
We prove theoretically that, provided the learning rates are sufficiently small, at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss given by BPTT (Theorem 1).
We demonstrate training with C-EP on MNIST and generalize C-EP to neural networks where neurons are connected by asymmetric connections.
We show through experiments that the more the network updates follow the gradients of BPTT, the better it performs in terms of training.
These results bring EP a step closer to biology while maintaining its intimate link with backpropagation.
A motivation for deep learning is that a few simple principles may explain animal intelligence and allow us to build intelligent machines, and learning paradigms must be at the heart of such principles, creating a synergy between neuroscience and Artificial Intelligence (AI) research.
In the deep learning approach to AI (LeCun et al., 2015) , backpropagation thrives as the most powerful algorithm for training artificial neural networks.
Unfortunately, its implementation on conventional computer or dedicated hardware consumes more energy than the brain by several orders of magnitude (Strubell et al., 2019) .
One path towards reducing the gap between brains and machines in terms of power consumption is by investigating alternative learning paradigms relying on locally available information, which would allow radically different hardware implementations: such local learning rules could be used for the development of extremely energy efficient learning-capable hardware.
Investigating such bioplausible learning schemes with real-world applicability is therefore of interest not only for neuroscience, but also for developing neuromorphic computing hardware that takes inspiration from information-encoding, dynamics and topology of the brain to reach fast and energy efficient AI (Ambrogio et al., 2018; Romera et al., 2018) .
In these regards, Equilibrium Propagation (EP) is an alternative style of computation for estimating error gradients that presents significant advantages (Scellier and Bengio, 2017) .
EP belongs to the family of contrastive Hebbian learning (CHL) algorithms (Ackley et al., 1985; Movellan, 1991; Hinton, 2002) and therefore benefits from an important feature of these algorithms: neural dynamics and synaptic updates depend solely on information that is locally available.
As a CHL algorithm, EP applies to convergent RNNs, i.e. RNNs that are fed by a static input and converge to a steady state.
Training such a convergent RNN consists in adjusting the weights so that the steady state corresponding to an input x produces output values close to associated targets y.
CHL algorithms proceed in two phases: in the first phase, neurons evolve freely without external influence and settle to a (first) steady state; in the second phase, the values of output neurons are influenced by the target y and the neurons settle to a second steady state.
CHL weight updates consist in a Hebbian rule strengthening the connections between co-activated neurons at the first steady state, and an anti-Hebbian rule with opposite effect at the second steady state.
A difference between Equilibrium Propagation and standard CHL algorithms is that output neurons are not clamped in the second phase but elastically pulled towards the target y.
A second key property of EP is that, unlike CHL and other related algorithms, it is intimately linked to backpropagation.
It has been shown that synaptic updates in EP follow gradients of recurrent backpropagation (RBP) and backpropagation through time (BPTT) (Ernoult et al., 2019) .
This makes it especially attractive to bridge the gap between neural networks developed by neuroscientists, neuromorphic researchers and deep learning researchers.
Nevertheless, the bioplausibility of EP still faces two major limitations.
First, although EP is local in space, it is non-local in time.
In all existing implementations of EP the weight update is performed after the dynamics of the second phase have converged, when the first steady state is no longer physically available.
Thus the first steady state has to be artificially stored.
Second, the network dynamics have to derive from a primitive function, which is equivalent to the requirement of symmetric weights in the Hopfield model.
These two requirements are biologically unrealistic and also hinder the development of efficient EP computing hardware.
In this work, we propose an alternative implementation of EP (called C-EP) which features temporal locality, by enabling synaptic dynamics to occur throughout the second phase, simultaneously with neural dynamics.
We then address the second issue by adapting C-EP to systems having asymmetric synaptic connections, taking inspiration from Scellier et al. (2018) ; we call this modified version C-VF.
More specifically, the contributions of the current paper are the following:
• We introduce Continual Equilibrium Propagation (C-EP, Section 3.1-3.2), a new version of EP with continual weight updates: the weights of the network are adjusted continually in the second phase of training using local information in space and time.
Neuron steady states do not need to be stored after the first phase, in contrast with standard EP where a global weight update is performed at the end of the second phase.
Like standard EP, the C-EP algorithm applies to networks whose synaptic connections between neurons are assumed to be symmetric and tied.
• We show mathematically that, provided that the changes in synaptic strengths are sufficiently slow (i.e. the learning rates are sufficiently small), at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss obtained with BPTT (Theorem 1 and Fig. 2 , Section 3.3).
We call this property the Gradient Descending Dynamics (GDD) property, for consistency with the terminology used in Ernoult et al. (2019) .
• We demonstrate training with C-EP on MNIST, with accuracy approaching the one obtained with standard EP (Section 4.2).
• Finally, we adapt our C-EP algorithm to the more bio-realistic situation of a neural network with asymmetric connections between neurons.
We call this modified version C-VF as it is inspired by the Vector Field method proposed in Scellier et al. (2018) .
We demonstrate this approach on MNIST, and show numerically that the training performance is correlated with the satisfaction of Gradient Descending Dynamics (Section 4.3).
For completeness, we also show how the Recurrent Backpropagation (RBP) algorithm of Almeida (1987) ; Pineda (1987) relates to C-EP, EP and BPTT.
We illustrate the equivalence of these four algorithms on a simple analytical model ( Fig. 3 ) and we develop their relationship in Appendix A.
Equilibrium Propagation is an algorithm that leverages the dynamical nature of neurons to compute weight gradients through the physics of the neural network.
C-EP embraces simultaneous synapse and neuron dynamics, resolving the initial need of artificial memory units for storing the neuron values between different phases.
The C-EP framework preserves the equivalence with Backpropagation Through Time: in the limit of sufficiently slow synaptic dynamics (i.e. small learning rates), the system satisfies Gradient Descending Dynamics (Theorem 1).
Our experimental results confirm this theorem.
When training our vanilla RNN with symmetric weights with C-EP while ensuring convergence in 100 epochs, a modest reduction in MNIST accuracy is seen with regards to standard EP.
This accuracy reduction can be eliminated by using smaller learning rates and rescaling up the total weight update at the end of the second phase (Appendix F.2).
On top of extending the theory of Ernoult et al. (2019) , Theorem 1 also appears to provide a statistically robust tool for C-EP based learning.
Our experimental results show as in Ernoult et al. (2019) that, for a given network with specified neuron and synapse dynamics, the more the updates of Equilibrium Propagation follow the gradients provided by Backpropagation Through Time before training (in terms of angle in this work), the better this network can learn.
Our C-EP and C-VF algorithms exhibit features reminiscent of biology.
C-VF extends C-EP training to RNNs with asymmetric weights between neurons, as is the case in biology.
Its learning rule, local in space and time, is furthermore closely related to Spike Timing Dependent Plasticity (STDP), a learning rule widely studied in neuroscience, inferred in vitro and in vivo from neural recordings in the hippocampus (Dan and Poo, 2004).
In STDP, the synaptic strength is modulated by the relative timings of pre and post synaptic spikes within a precise time window (Bi and Poo, 1998; 2001) .
[Figure caption: each randomly selected synapse corresponds to one color; dashed and continuous lines coincide for standard EP but split apart upon untying the weights and using continual updates.]
Strikingly, the same rule that we use for C-VF learning can approximate STDP correlations in a rate-based formulation, as shown through numerical experiments in prior work.
From this viewpoint our work brings EP a step closer to biology.
However, C-EP and C-VF do not aim at being models of biological learning per se, in that it would account for how the brain works or how animals learn, for which Reinforcement Learning might be a more suited learning paradigm.
The core motivation of this work is to propose a fully local implementation of EP, in particular to foster its hardware implementation.
When computed on a standard computer, due to the use of small learning rates to mimic analog dynamics within a finite number of epochs, training our models with C-EP and C-VF entail long simulation times.
With a Titan RTX GPU, training a fully connected architecture on MNIST takes 2 hours 39 mins with 1 hidden layer and 10 hours 49 mins with 2 hidden layers.
On the other hand, C-EP and C-VF might be particularly efficient in terms of speed and energy consumption when operated on neuromorphic hardware that employs analog device physics (Ambrogio et al., 2018; Romera et al., 2018) .
To this purpose, our work can provide an engineering guidance to map our algorithm onto a neuromorphic system.
Fig. 5(a) shows that hyperparameters should be tuned so that, before training, C-EP updates stay within 90° of the gradients provided by BPTT.
More concretely, in practice this amounts to tuning the degree of symmetry of the dynamics, for instance the angle between forward and backward weights - see Fig. 4.1.
Our work is one step towards bridging Equilibrium Propagation with neuromorphic computing and thereby energy efficient implementations of gradient-based learning algorithms.
A PROOF OF THEOREM 1
In this appendix, we prove Theorem 1, which we recall here.
Theorem 1 (GDD Property).
Let s_0, s_1, ..., s_T be the convergent sequence of states and denote s* = s_T the steady state.
Further assume that there exists some step K, with 0 < K ≤ T, such that s* = s_T = s_{T-1} = ... = s_{T-K}.
Then, in the limit η → 0 and β → 0, the first K normalized updates in the second phase of C-EP are equal to the negatives of the first K gradients of BPTT.
A.1 A SPECTRUM OF FOUR COMPUTATIONALLY EQUIVALENT LEARNING ALGORITHMS
Proving Theorem 1 amounts to proving the equivalence of C-EP and BPTT.
In fact we can prove the equivalence of four algorithms, which all compute the gradient of the loss:
1. Backpropagation Through Time (BPTT), presented in Section B.2,
2. Recurrent Backpropagation (RBP), presented in Section B.3,
3. Equilibrium Propagation (EP), presented in Section 2,
4. Equilibrium Propagation with Continual Weight Updates (C-EP), introduced in Section 3.
In this spectrum of algorithms, BPTT is the most practical algorithm to date from the point of view of machine learning, but also the least biologically realistic.
In contrast, C-EP is the most realistic in terms of implementation in biological systems, while it is to date the least practical and least efficient for conventional machine learning (computations on standard Von-Neumann hardware are considerably slower due to repeated parameter updates, requiring memory access at each time-step of the second phase). | We propose a continual version of Equilibrium Propagation, where neuron and synapse dynamics occur simultaneously throughout the second phase, with theoretical guarantees and numerical simulations. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:692 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
There are two main lines of research on visual reasoning: neural module network (NMN) with explicit multi-hop reasoning through handcrafted neural modules, and monolithic network with implicit reasoning in the latent feature space.
The former excels in interpretability and compositionality, while the latter usually achieves better performance due to model flexibility and parameter efficiency.
In order to bridge the gap of the two, we present Meta Module Network (MMN), a novel hybrid approach that can efficiently utilize a Meta Module to perform versatile functionalities, while preserving compositionality and interpretability through modularized design.
The proposed model first parses an input question into a functional program through a Program Generator.
Instead of handcrafting a task-specific network to represent each function like traditional NMN, we use Recipe Encoder to translate the functions into their corresponding recipes (specifications), which are used to dynamically instantiate the Meta Module into Instance Modules.
To endow different instance modules with designated functionality, a Teacher-Student framework is proposed, where a symbolic teacher pre-executes against the scene graphs to provide guidelines for the instantiated modules (student) to follow.
In a nutshell, MMN adopts the meta module to increase its parameterization efficiency, and uses recipe encoding to improve its generalization ability over NMN.
Experiments conducted on the GQA benchmark demonstrates that: (1) MMN achieves significant improvement over both NMN and monolithic network baselines; (2) MMN is able to generalize to unseen but related functions.
Visual reasoning requires a model to learn strong compositionality and generalization abilities, i.e., understanding and answering compositional questions without having seen similar semantic compositions before.
Such compositional visual reasoning is a hallmark for human intelligence that endows people with strong problem-solving skills given limited prior knowledge.
Recently, neural module networks (NMNs) (Andreas et al., 2016a; Hu et al., 2017; Johnson et al., 2017b; Hu et al., 2018; Mao et al., 2019) have been proposed to perform such complex reasoning tasks.
First, NMN needs to pre-define a set of functions and explicitly encode each function into unique shallow neural networks called modules, which are composed dynamically to build an instance-specific network for each input question.
This approach has high compositionality and interpretability, as each module is specifically designed to accomplish a specific sub-task and multiple modules can be combined to perform unseen combinations during inference.
However, with increased complexity of the task, the set of functional semantics and modules also scales up.
As observed in Hudson & Manning (2018) , this leads to higher model complexity and poorer scalability on more challenging scenarios.
Another line of research on visual reasoning is focused on designing monolithic network architecture, such as MFB (Yu et al., 2017) , BAN (Kim et al., 2018) , DCN (Nguyen & Okatani, 2018) , and MCAN .
These black-box methods have achieved state-of-the-art performance on more challenging realistic image datasets like VQA (Hudson & Manning, 2019a) , surpassing the aforementioned NMN approach.
They use a unified neural network to learn general-purpose reasoning skills (Hudson & Manning, 2018) , which is known to be more flexible and scalable without making strict assumption about the inputs or designing operation-specific networks for the predefined functional semantics.
As the reasoning procedure is conducted in the latent feature space, the reasoning process is difficult to interpret.
Such a model also lacks the ability to capture the compositionality of questions, thus suffering from poorer generalizability than module networks.
In this paper, we propose Meta Module Network that bridges the gap between monolithic networks and traditional module networks.
Our model is built upon a Meta Module, which can be instantiated into an instance module performing specific functionalities.
Our approach significantly outperforms baseline methods and achieves comparable performance to state of the art.
Detailed error analysis shows that relation modeling over scene graph could further boost MMN for higher performance.
For future work, we plan to incorporate scene graph prediction into the proposed framework. | We propose a new Meta Module Network to resolve some of the restrictions of previous Neural Module Network to achieve strong performance on realistic visual reasoning dataset. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:693 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose a new perspective on adversarial attacks against deep reinforcement learning agents.
Our main contribution is CopyCAT, a targeted attack able to consistently lure an agent into following an outsider's policy.
It is pre-computed, therefore fast inferred, and could thus be usable in a real-time scenario.
We show its effectiveness on Atari 2600 games in the novel read-only setting.
In the latter, the adversary cannot directly modify the agent's state -its representation of the environment- but can only attack the agent's observation -its perception of the environment.
Directly modifying the agent's state would require a write-access to the agent's inner workings and we argue that this assumption is too strong in realistic settings.
We are interested in the problem of attacking sequential control systems that use deep neural policies.
In the context of supervised learning, previous work developed methods to attack neural classifiers by crafting so-called adversarial examples.
These are malicious inputs particularly successful at fooling deep networks with high-dimensional input-data like images.
Within the framework of sequential-decision-making, previous works used these adversarial examples only to break neural policies.
Yet the attacks they build are rarely applicable in a real-time setting as they require to craft a new adversarial input at each time step.
Besides, these methods use the strong assumption of having a write-access to what we call the agent's inner state -the actual input of the neural policy built by the algorithm from the observations-.
Under this assumption, the adversary -the algorithm attacking the agent- is not placed at the interface between the agent and the environment, where the system is the most vulnerable.
We wish to design an attack with a more general purpose than just shattering a neural policy as well as working in a more realistic setting.
Our main contribution is CopyCAT, an algorithm for taking full-control of neural policies.
It produces a simple attack that is: (1) targeted towards a policy, i.e., it aims at matching a neural policy's behavior with the one of an arbitrary policy; (2) only altering observation of the environment rather than complete agent's inner state; (3) composed of a finite set of pre-computed state-independent masks.
This way it requires no additional time at inference hence it could be usable in a real-time setting.
We introduce CopyCAT in the white-box scenario, with read-only access to the weights and the architecture of the neural policy.
This is a realistic setting as prior work showed that after training substitute models, one could transfer an attack computed on these to the inaccessible attacked model (Papernot et al., 2016) .
The context is the following: (1) We are given any agent using a neuralnetwork for decision-making (e.g., the Q-network for value-based agents, the policy network for actor-critic or imitation learning methods) and a target policy we want the agent to follow.
(2) The only thing one can alter is the observation the agent receives from the environment and not the full input of the neural controller (the inner state).
In other words, we are granted a read-only access to the agent's inner workings.
In the case of Atari 2600 games, the agent builds its inner state by stacking the last four observations.
Attacking the agent's inner state means writing in the agent's memory of the last observations.
(3) The computed attack should be inferred fast enough to be used in real-time.
We stress the fact that targeting a policy is a more general scheme than untargeted attacks where the goal is to stop the agent from taking its preferred action (hoping for it to take the worst).
It is also more general than the targeted scheme of previous works where one wants the agent to take its least preferred action or to reach a specific state.
In our setting, one can either hard-code or train a target policy.
This policy could be minimizing the agent's true reward but also maximizing the reward for another task.
For instance, this could mean taking full control of an autonomous vehicle, possibly bringing it to any place of your choice.
We exemplify this approach on the classical benchmark of Atari 2600 games.
We show that taking control of a trained deep RL agent so that its behavior matches a desired policy can be done with this very simple attack.
We believe such an attack reveals the vulnerability of autonomous agents.
As one could lure them into following catastrophic behaviors, autonomous cars, robots or any agent with high dimensional inputs are exposed to such manipulation.
This suggests that it would be worth studying new defense mechanisms that could be specific to RL agents, but this is out of the scope of this paper.
In this work, we built and showed the effectiveness of CopyCAT, a simple algorithm designed to attack neural policies in order to manipulate them.
We showed its ability to lure a policy into having a desired behavior with a finite set of additive masks, usable in a real-time setting while being applied only on observations of the environment.
We demonstrated the effectiveness of these universal masks in Atari games.
As this work shows that one can easily manipulate a policy's behavior, a natural direction of work is to develop robust algorithms, either able to keep their normal behaviors when attacked or to detect attacks to treat them appropriately.
Notice however that in a sequential-decisionmaking setting, detecting an attack is not enough as the agent cannot necessarily stop the process when detecting an attack and may have to keep outputting actions for incoming observations.
It is thus an exciting direction of work to develop algorithms that are able to maintain their behavior under such manipulating attacks.
Another interesting direction of work in order to build real-life attacks is to test targeted attacks on neural policies in the black-box scenario, with no access to network's weights and architecture.
However, targeted adversarial examples are harder to compute than untargeted ones and we may experience more difficulties in reinforcement learning than supervised learning.
Indeed, learned representations are known to be less interpretable and the variability between different random seeds to be higher than in supervised learning.
Different policies trained with the same algorithm may thus lead to S → A mappings with very different decision boundaries.
Transferring targeted examples may not be easy and would probably require to train imitation models to obtain mappings similar to π in order to compute transferable adversarial examples. | We propose a new attack for taking full control of neural policies in realistic settings. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:694 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Cold-start and efficiency issues of the Top-k recommendation are critical to large-scale recommender systems.
Previous hybrid recommendation methods are effective in dealing with cold-start issues by extracting real latent factors of cold-start items (users) from side information, but they still suffer from low efficiency in online recommendation caused by the expensive similarity search in real latent space.
This paper presents a collaborative generated hashing (CGH) to improve the efficiency by denoting users and items as binary codes, which applies to various settings: cold-start users, cold-start items and warm-start ones.
Specifically, CGH is designed to learn hash functions of users and items through the Minimum Description Length (MDL) principle; thus, it can deal with various recommendation settings.
In addition, CGH initiates a new marketing strategy through mining potential users by a generative step.
To reconstruct effective users, the MDL principle is used to learn compact and informative binary codes from the content data.
Extensive experiments on two public datasets show the advantages for recommendations in various settings over competing baselines and analyze the feasibility of the application in marketing.
With the explosion of e-commerce, most customers are accustomed to receiving a variety of recommendations, such as movies, books, news, or hotels they might be interested in.
Traditional recommender systems just recommended items that are similar to what they liked or rated in the previous.
Recommendations help users find their desirable items, and also creates new revenue opportunities for vendors, such as Amazon, Taobao, eBay, etc.
Among them, one of the most popular recommendation methods, collaborative filtering is dependent on a large amount of user-item interactive information to provide an accurate recommendation.
However, most of new e-commerce vendors do not have enough interactive data, which leads to low recommendation accuracy, i.e., cold-start issues.
Previous studies on cold-start issues generally modeled as a combination of collaborative filtering and content filtering, known as hybrid recommender systems.
Specifically, they learned real latent factors by incorporating the side information into the interactive data.
Such as Collaborative Deep Learning (CDL) (Wang et al., 2015) , Visual Bayesian Personalized Ranking (VBPR) (He & McAuley, 2016) , Collaborative Topic modeling for Recommedation (CTR) (Wang & Blei, 2011) , and the DropoutNet for addressing cold start (DropoutNet) (Volkovs et al., 2017) , ABCPRec for Bridging Consumer and Producer Roles for User-Generated Content Recommendation (ABCPRec) (Tsukuda et al., 2019) .
All of the above hybrid recommender systems were modeled in real latent space, which leads to low efficiency for the online recommendation with the increasing scale of datasets.
Learning binary codes to speed up that search, however, involves hard-to-optimize discrete objectives.
Thus many scholars learned binary codes by approximate techniques, such as the two-stage hashing learning method utilized in Preference Preserving Hashing (PPH) and the Iterative Quantization (ITQ) (Zhou & Zha, 2012).
To reduce information loss, two learning-based hashing frameworks, bit-wise learning and block-wise learning, were respectively proposed in hashing-based recommendation frameworks (Zhang et al., 2016; Zhang et al., 2018).
However, due to the requirement of binary outputs for learning-based hashing frameworks, the training procedure is expensive for large-scale recommendation, which motivates us to propose a generative approach to learn hash functions.
In this paper, we propose the collaborative generated hashing(CGH) to learn hash functions of users and items from content data with the principle of Minimum Description Length (MDL) (Dai et al., 2017) .
In marketing area, mining potential customers is crucial to the e-commerce.
CGH provides a strategy to discover potential users by the generative step.
To reconstruct effective users, uncorrelated and balanced limits are imposed to learn compact and informative binary codes with the principle of the MDL.
Especially, discovering potential customers is vital to the success of adding new items for a recommendation platform (Papies et al., 2017) .
Specifically, for a new item, we can generate a new potential user by the generative step (detailed in Section 2.1), and then search the nearest potential users in the user set.
By recommending a new product to the potential users who might be interested in but didn't plan to buy, further e-commerce strategies can be developed to attract those potential users.
We organize the paper as follows: Section 2 introduce the main techniques of CGH.
We first introduce the framework of CGH and compare it with the closely related competing baselines: CDL (Wang et al., 2015) and DropoutNet (Volkovs et al., 2017) ; we then formulate the generative step in Section 2.1 and the inference step in Section 2.2, respectively; we finally summarize the training objective and introduce the optimization in Section 2.3.
Particularly, we demonstrate the process of mining potential users for the marketing application in Section 2.1.
Section 3 presents the experimental results for marketing analysis and recommendation accuracy in various settings.
Section 4 concludes the paper.
The main contributions of this paper are summarized as follows:
(1) We propose the Collaborative Generated Hashing (CGH) with the principle of MDL to learn compact but informative hash codes, which applies to various settings for recommendation.
(2) We provide a marketing strategy by discovering potential users via the generative step of CGH, which can be applied to boost e-commerce development.
(3) We evaluate the effectiveness of the proposed CGH compared with the state-of-the-art baselines, and demonstrate its robustness and convergence properties on the public datasets.
In this paper, a generated recommendation framework called collaborative generated hashing (CGH) is proposed to address the cold-start and efficiency issues for recommendation.
The main contributions put forward in this paper are: (1) we develop a collaborative generated hashing framework with the principle of Minimum Description Length (MDL), together with uncorrelated and balanced constraints on the inference process, to derive compact and informative hash codes, which is significant for the accuracy of recommendation and marketing; (2) we propose a marketing strategy based on the proposed CGH; specifically, we design a framework to discover the k potential users by the generative step; (3) we evaluate the proposed scheme on two public datasets, and the experimental results show the effectiveness of the proposed CGH for both warm-start and cold-start recommendation. | It can generate effective hash codes for efficient cold-start recommendation and meanwhile provide a feasible marketing strategy. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:695 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent efforts to combine Representation Learning with Formal Methods, commonly known as the Neuro-Symbolic Methods, have given rise to a new trend of applying rich neural architectures to solve classical combinatorial optimization problems.
In this paper, we propose a neural framework that can learn to solve the Circuit Satisfiability problem.
Our framework is built upon two fundamental contributions: a rich embedding architecture that encodes the problem structure and an end-to-end differentiable training procedure that mimics Reinforcement Learning and trains the model directly toward solving the SAT problem.
The experimental results show the superior out-of-sample generalization performance of our framework compared to the recently developed NeuroSAT method.
Recent advances in neural network models for discrete structures have given rise to a new field in Representation Learning known as the Neuro-Symbolic methods.
Generally speaking, these methods aim at marrying the classical symbolic techniques in Formal Methods and Computer Science to Deep Learning in order to benefit both disciplines.
One of the most exciting outcomes of this marriage is the emergence of neural models for learning how to solve the classical combinatorial optimization problems in Computer Science.
The key observation behind many of these models is that in practice, for a given class of combinatorial problems in a specific domain, the problem instances are typically drawn from a certain (unknown) distribution.
Therefore if a sufficient number of problem instances are available, then in principle, Statistical Learning should be able to extract the common structures among these instances and produce meta-algorithms (or models) that would, in theory, outperform the carefully hand-crafted algorithms. There have been two main approaches to realize this idea in practice.
In the first group of methods, the general template of the solver algorithm (which is typically the greedy strategy) is directly imported from the classical heuristic search algorithm, and the Deep Learning component is only tasked to learn the optimal heuristics within this template.
In combination with Reinforcement Learning, such strategy has been shown to be quite effective for various NP-complete problems -e.g. BID16 .
Nevertheless, the resulted model is bounded by the greedy strategy, which is sub-optimal in general.
The alternative is to go one step further and let Deep Learning figure out the entire solution structure from scratch.
This approach is quite attractive as it allows the model not only learn the optimal (implicit) decision heuristics but also the optimal search strategies beyond the greedy strategy.
However, this comes at a price: training such models can be quite challenging!
To do so, a typical candidate is Reinforcement Learning (Policy Gradient, in specific), but such techniques are usually sample inefficient -e.g. BID4 .
As an alternative method for training, more recently BID24 have proposed using the latent representations learned for the binary classification of the Satisfiability (SAT) problem to actually produce a neural SAT solver model.
Even though using such proxy for learning a SAT solver is an interesting observation and provides us with an end-to-end differentiable architecture, the model is not directly trained toward solving a SAT problem (unlike Reinforcement Learning).
As we will see later in this paper, that can indeed result in poor generalization and sub-optimal models. In this paper, we propose a neural Circuit-SAT solver framework that effectively belongs to the second class above; that is, it learns the entire solution structure from scratch.
More importantly, to train such model, we propose a training strategy that, unlike the typical Policy Gradient, is differentiable end-toend, yet it trains the model directly toward the end goal (similar to Policy Gradient).
Furthermore, our proposed training strategy enjoys an Explore-Exploit mechanism for better optimization even though it is not exactly a Reinforcement Learning approach. The other aspect of building neural models for solving combinatorial optimization problems is how the problem instance should be represented by the model.
Using classical architectures like RNNs or LSTMs completely ignores the inherent structure present in the problem instances.
For this very reason, there has been recently a strong push to employ structure-aware architectures such as different variations of neural graph embedding.
Most neural graph embedding methodologies are based on the idea of synchronously propagating local information on an underlying (undirected) graph that represents the problem structure.
The intuition behind using local information propagation for embedding comes from the fact that many original combinatorial optimization algorithms can actually be seen propagating information.
In our case, since we are dealing with Boolean circuits and circuit are Directed Acyclic Graphs (DAG), we would need an embedding architecture that take into account the special architecture of DAGs (i.e. the topological order of the nodes).
In particular, we note that in many DAG-structured problems (such as circuits, computational graphs, query DAGs, etc.), the information is propagated sequentially rather than synchronously, hence a justification to have sequential propagation for the embedding as well.
To this end, we propose a rich embedding architecture that implements such propagation mechanism for DAGs.
As we see in this paper, our proposed architecture is capable of harnessing the structural information in the input circuits.
To summarize, our contributions in this work are three-fold:
(a) We propose a general, rich graph embedding architecture that implements sequential propagation for DAG-structured data.
(b) We adapt our proposed architecture to design a neural Circuit-SAT solver which is capable of harnessing structural signals in the input circuits to learn a SAT solver.
(c) We propose a training strategy for our architecture that is end-to-end differentiable, yet similar to Reinforcement Learning techniques, it directly trains our model toward solving the SAT problem with an Explore-Exploit mechanism.
The experimental results show the superior performance of our framework especially in terms of generalizing to new problem domains compared to the baseline.
In this paper, we proposed a neural framework for efficiently learning a Circuit-SAT solver.
Our methodology relies on two fundamental contributions: (1) a rich DAG-embedding architecture that implements the sequential propagation mechanism on DAG-structured data and is capable of learning useful representations for the input circuits, and (2) an efficient training procedure that trains the DAGembedding architecture directly toward solving the SAT problem without requiring SAT/UNSAT labels in general.
Our proposed training strategy is fully differentiable end-to-end and at the same time enjoys many features of Reinforcement Learning such as an Explore-Exploit mechanism and direct training toward the end goal.As our experiments showed, the proposed embedding architecture is able to harness structural information in the input DAG distribution and as a result solve the test SAT cases in a fewer number of iterations compared to the baseline.
This would also allow us to inject domain-specific heuristics into the circuit structure of the input data to obtain better models for that specific domain.
Moreover, our direct training procedure as opposed to the indirect, classification-based method in NeuroSAT enables our model to generalize better to out-of-sample test cases, as demonstrated by the experiments.
This superior generalization got even more expressed as we transferred the trained models to a complete new domain (i.e. graph coloring).
Furthermore, we argued that not only does direct training give us superior out-of-sample generalization, but it is also essential for the problem domains where we cannot enforce the strict training regime where SAT and UNSAT cases come in pairs with almost identical structures, as proposed by BID24 .Future
efforts in this direction would include closely examining the SAT solver algorithm learned by our framework to see if any high-level knowledge and insight can be extracted to further aide the classical SAT solvers. Needless
to say, this type of neural models have a long way to go in order to compete with industrial SAT solvers; nevertheless, these preliminary results are promising enough to motivate the community to pursue this direction. | We propose a neural framework that can learn to solve the Circuit Satisfiability problem from (unlabeled) circuit instances. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:696 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Sequence generation models such as recurrent networks can be trained with a diverse set of learning algorithms.
For example, maximum likelihood learning is simple and efficient, yet suffers from the exposure bias problem.
Reinforcement learning like policy gradient addresses the problem but can have prohibitively poor exploration efficiency.
A variety of other algorithms such as RAML, SPG, and data noising, have also been developed in different perspectives.
This paper establishes a formal connection between these algorithms.
We present a generalized entropy regularized policy optimization formulation, and show that the apparently divergent algorithms can all be reformulated as special instances of the framework, with the only difference being the configurations of reward function and a couple of hyperparameters.
The unified interpretation offers a systematic view of the varying properties of exploration and learning efficiency.
Besides, based on the framework, we present a new algorithm that dynamically interpolates among the existing algorithms for improved learning.
Experiments on machine translation and text summarization demonstrate the superiority of the proposed algorithm.
Sequence generation is a ubiquitous problem in many applications, such as machine translation BID28 , text summarization BID13 BID25 , image captioning BID15 , and so forth.
Great advances in these tasks have been made by the development of sequence models such as recurrent neural networks (RNNs) with different cells BID12 BID6 and attention mechanisms BID1 BID19 .
These models can be trained with a variety of learning algorithms.The standard training algorithm is based on maximum-likelihood estimation (MLE) which seeks to maximize the log-likelihood of ground-truth sequences.
Despite the computational simplicity and efficiency, MLE training suffers from the exposure bias BID24 .
That is, the model is trained to predict the next token given the previous ground-truth tokens; while at test time, since the resulting model does not have access to the ground truth, tokens generated by the model itself are instead used to make the next prediction.
This discrepancy between training and test leads to the issue that mistakes in prediction can quickly accumulate.
Recent efforts have been made to alleviate the issue, many of which resort to the reinforcement learning (RL) techniques BID24 BID2 BID8 .
For example, BID24 adopt policy gradient BID29 that avoids the training/test discrepancy by using the same decoding strategy.
However, RL-based approaches for sequence generation can face challenges of prohibitively poor sample efficiency and high variance.
For more practical training, a diverse set of methods has been developed that are in a middle ground between the two paradigms of MLE and RL.
For example, RAML adds reward-aware perturbation to the MLE data examples; SPG BID8 leverages reward distribution for effective sampling of policy gradient.
Other approaches such as data noising BID34 ) also show improved results.In this paper, we establish a unified perspective of the broad set of learning algorithms.
Specifically, we present a generalized entropy regularized policy optimization framework, and show that the apparently diverse algorithms, such as MLE, RAML, SPG, and data noising, can all be re-formulated as special instances of the framework, with the only difference being the choice of reward and the values of a couple of hyperparameters ( FIG0 ).
In particular, we show MLE is equivalent to using a delta-function reward that assigns 1 to samples that exactly match data examples while −∞ to any other samples.
Such extremely restricted reward has literally disabled any exploration of the model beyond training data, yielding the exposure bias.
Other algorithms essentially use rewards that are smoother, and also leverage the model distribution for exploration, which generally results in a larger effective exploration space, more difficult training, and better test-time performance. Besides the new understandings of the existing algorithms, the unified perspective also facilitates developing new algorithms for improved learning.
We present an example new algorithm that, as training proceeds, gradually expands the exploration space by annealing the reward and hyperparameter values.
The annealing in effect dynamically interpolates among the existing algorithms.
Experiments on machine translation and text summarization show the interpolation algorithm achieves significant improvement over the various existing methods.
We have presented a unified perspective of a variety of well-used learning algorithms for sequence generation.
The framework is based on a generalized entropy regularized policy optimization formulation, and we show these algorithms are mathematically equivalent to specifying certain hyperparameter configurations in the framework.
The new principled treatment provides systematic understanding and comparison among the algorithms, and inspires further enhancement.
The proposed interpolation algorithm shows consistent improvement in machine translation and text summarization.
We would be excited to extend the framework to other settings such as robotics and game environments.
A POLICY GRADIENT & MIXER
BID24 made an early attempt to address the exposure bias problem by exploiting the policy gradient algorithm BID29.
Policy gradient aims to maximizes the expected reward: DISPLAYFORM0 where R P G is usually a common reward function (e.g., BLEU).
Taking gradient w.r.t θ gives: DISPLAYFORM1 We now reveal the relation between the ERPO framework we present and the policy gradient algorithm.Starting from the M-step of Eq.(2) and setting (α = 1, β = 0) as in SPG (section 3.4), we use p θ n as the proposal distribution and obtain the importance sampling estimate of the gradient (we omit the superscript n for notation simplicity): DISPLAYFORM2 where Z θ = y exp{log p θ + R} is the normalization constant of q, which can be considered as adjusting the step size of gradient descent.We can see that Eq.(12) recovers Eq.(11) if we further set R = log R P G , and omit the scaling factor Z θ . In
other words, policy gradient can be seen as a special instance of the general ERPO framework with (R = log R P G , α = 1, β = 0) and with Z θ omitted.The MIXER algorithm BID24 incorporates an annealing strategy that mixes between MLE and policy gradient training. Specifically
Specifically, given a ground-truth example y*, the first m tokens y*_{1:m} are used for evaluating the MLE loss, and starting from step m + 1, the policy gradient objective is used.
The m value decreases as training proceeds.
With the relation between policy gradient and ERPO as established above, MIXER can be seen as a specific instance of the proposed interpolation algorithm (section 4) that follows a restricted annealing strategy for the token-level hyperparameters (λ_1, λ_2, λ_3).
That is, for t < m in Eq.(4) (i.e., the first m steps), (λ_1, λ_2, λ_3) is set to (0, 0, 1) and c = 1, namely the MLE training; while for t > m, (λ_1, λ_2, λ_3) is set to (0.5, 0.5, 0) and c = 2. | A unified perspective of various learning algorithms for sequence generation, such as MLE, RL, RAML, data noising, etc. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:697 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We report on the SHINRA project, a project for structuring Wikipedia with a collaborative construction scheme.
The goal of the project is to create a huge and well-structured knowledge base to be used in NLP applications, such as QA, Dialogue systems and explainable NLP systems.
It is created based on a scheme of "Resource by Collaborative Contribution (RbCC)".
We conducted a shared task of structuring Wikipedia, and at the same time, the submitted results are used to construct a knowledge base.
There are machine readable knowledge bases such as CYC, DBpedia, YAGO, Freebase, Wikidata and so on, but each of them has problems to be solved.
CYC has a coverage problem, and the others have a coherence problem, since they are based on Wikipedia and/or created by many, but inherently incoherent, crowd workers.
In order to solve the latter problem, we started a project for structuring Wikipedia using an automatic knowledge base construction shared-task.
The automatic knowledge base construction shared-tasks have been popular and well studied for decades.
However, these tasks are designed only to compare the performances of different systems, and to find which system ranks the best on limited test data.
The results of the participating systems are not shared, and the systems may be abandoned once the task is over.
We believe this situation can be improved by the following changes:
1. designing the shared-task to construct a knowledge base rather than evaluating only on limited test data
2. making the outputs of all the systems open to the public so that we can run ensemble learning to create better results than the best systems
3. repeating the task so that we can run the task with larger and better training data from the output of the previous task (bootstrapping and active learning)
We conducted “SHINRA2018” with the above-mentioned scheme, and in this paper
we report the results and the future directions of the project.
The task is to extract the values of the pre-defined attributes from Wikipedia pages.
We have categorized most of the entities in Japanese Wikipedia (namely 730 thousand entities) into the 200 ENE categories.
Based on this data, the shared-task is to extract the values of the attributes from Wikipedia pages.
We gave out 600 training examples per category, and the participants are required to submit the attribute-values for all remaining entities of the same category type.
Then 100 entities per category are held out to evaluate the system outputs in the shared-task.
We conducted a preliminary ensemble learning on the outputs and found a 15-point F1 improvement on one category, and an average improvement of 8 F1 points across all 5 categories we tested, over a strong baseline.
Based on these promising results, we decided to conduct three tasks in 2019: a multi-lingual categorization task (ML), extraction for the same 5 categories in Japanese with larger training data (JP-5), and extraction for 34 new categories in Japanese (JP-34).
Wikipedia is a great resource as a knowledge base of the entities in the world.
However, Wikipedia is created for humans to read rather than for machines to process.
Our goal is to transform the current Wikipedia to a machine readable format based on a clean structure.
There are several machine readable knowledge bases (KB) such as CYC BID4 , DBpedia BID3 , YAGO BID7 , Freebase BID0 , Wikidata BID13 and so on, but each of them has problems to be solved.
CYC has a coverage problem, and others have a coherence problem due to the fact that these are based on Wikipedia and/or created by many but inherently incoherent crowd workers.
In order to solve these problems, we started a project for structuring Wikipedia via an automatic knowledge base construction (AKBC) shared-task based on a cleaner ontology definition. The automatic knowledge base construction shared-tasks have been popular for decades.
In particular, there are popular shared-tasks in the field of Information Extraction, Knowledge Base population and attribute extraction, such as KBP[U.S. National Institute of Standards and Technology (NIST) , 2018] and CoNLL.
However, most of these tasks are designed only to compare the performances of the participating systems, and to find which system ranks the best on limited test data.
The outputs of the participating systems are not shared, and the results and the systems may be abandoned once the evaluation task is over. We believe this situation can be improved by the following changes:
1. designing the shared-task to construct a knowledge base rather than only evaluating on limited test data
2. making the outputs of all the systems open to the public so that anyone can run ensemble learning to create better results than the best single system
3. repeating the task so that we can run the task with larger and better training data from the output of the previous task (active learning and bootstrapping)
We conducted "SHINRA2018" with the aforementioned ideas, which we call "Resource by Collaborative Contribution (RbCC)".
In this paper we report the first results and the future directions of the project. The task is to extract the values of the pre-defined attributes from Wikipedia entity pages.
We used Extended Named Entity (ENE) as the definition of the category (in total 200 categories in the ontology) and the attributes (average of 20 attributes) for each category.
We have categorized most of the entities in Japanese Wikipedia (namely 730 thousand entities) into the ENE categories prior to this project.
Based on this data, the sharedtask is to extract values of the attributes defined for the category of each entity.
At the SHINRA2018 project, we limited the target categories to 5, namely, person, company, city, airport and chemical compound.
We gave out 600 training examples for each of the 5 categories, and the participants are supposed to submit the attribute-values for all remaining entities of those categories in Japanese Wikipedia.
Then 100 entities out of the entire set of pages of each category are used for the evaluation of the participating systems in the shared-task.
For example, there are about 200K person entities in Japanese Wikipedia, and the participants have to extract the attribute-values, such as "birthday", "the organizations he/she has belonged to", "mentor" and "awards", from all the remaining entities (i.e. 199.4K = 200K - 600 entities).
Before starting the project, the participants signed the contract that all the output will be shared among all participants, so that anyone can conduct the ensemble learning on those outputs, and hence create a better knowledge base than the best system in the task.
Note that, to respect the participants' interests (e.g., a company may want to keep its system as its own property), only the outputs are required to be shared; the systems themselves do not have to be shared.
Promising results of the ensemble learning were achieved, and we envision that this will lead to the construction of a cleaner machine readable knowledge base.
We proposed a scheme of knowledge base creation: "Resource by Collaborative Contribution".
We conducted the Japanese Wikipedia structuring project, SHINRA2018, based on that scheme.
Based on Extended Named Entity, the top-down definition of categories and attributes for named entities, the task is to extract the attribute-values from Japanese Wikipedia pages.
8 groups participated in the task, and the ensemble learning results show that the RbCC scheme is practical and promising.
A large improvement over the best single system was achieved on the "airport" category (more than 15 points of F-score), and an average improvement of 8 F-score points was achieved using the weighted voting methods.
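To make the ensembling step concrete, the following is a minimal sketch of weighted voting over shared system outputs (the exact weighting and threshold used in SHINRA2018 are not specified in this text; the names and values below are assumptions):

```python
from collections import defaultdict

def weighted_vote(system_outputs, system_weights, threshold=0.5):
    """Combine attribute-value extractions from multiple systems.

    system_outputs: dict mapping system_id -> set of (entity, attribute, value) triples.
    system_weights: dict mapping system_id -> weight (e.g., validation F1).
    A candidate triple is kept if the normalized weight of the systems that
    proposed it reaches `threshold`.
    """
    votes = defaultdict(float)
    total_weight = sum(system_weights.values())
    for sys_id, triples in system_outputs.items():
        for triple in triples:
            votes[triple] += system_weights[sys_id]
    return {t for t, w in votes.items() if w / total_weight >= threshold}

# Toy example with three hypothetical systems.
outputs = {
    "sysA": {("Narita Airport", "IATA code", "NRT")},
    "sysB": {("Narita Airport", "IATA code", "NRT"), ("Narita Airport", "runways", "2")},
    "sysC": {("Narita Airport", "runways", "3")},
}
weights = {"sysA": 0.8, "sysB": 0.7, "sysC": 0.4}
print(weighted_vote(outputs, weights, threshold=0.5))
```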
We are planning to conduct SHINRA2019 based on the RbCC scheme on 3 tasks.
These are the multi-lingual categorization, the extraction of attribute-values for the same 5 categories, and the extraction of attribute-values for 30 new categories in Japanese. We would like to express our deep appreciation to all the participants and collaborators who helped this project.
Without the participation, we couldn't even try the ensemble learning and achieve the goal.
We hope to expand and spread the idea of the RbCC scheme beyond this particular kind of task and resource. | We introduce a "Resource by Collaborative Construction" scheme to create KB, structured Wikipedia | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:698 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent image super-resolution (SR) studies leverage very deep convolutional neural networks and the rich hierarchical features they offer, which leads to better reconstruction performance than conventional methods.
However, the small receptive fields in the up-sampling and reconstruction process of those models prevent them from taking full advantage of global contextual information.
This limits further performance improvement.
In this paper, inspired by the image reconstruction principles of the human visual system, we propose an image super-resolution global reasoning network (SRGRN) to effectively learn the correlations between different regions of an image through global reasoning.
Specifically, we propose a global reasoning up-sampling module (GRUM) and a global reasoning reconstruction block (GRRB).
They construct a graph model to perform relation reasoning on regions of low resolution (LR) images. They aim to reason about the interactions between different regions in the up-sampling and reconstruction process and thus leverage more contextual information to generate accurate details.
Our proposed SRGRN is more robust and can handle low resolution images that are corrupted by multiple types of degradation.
Extensive experiments on different benchmark data-sets show that our model outperforms other state-of-the-art methods.
Also our model is lightweight and consumes less computing power, which makes it very suitable for real life deployment.
Image Super-Resolution (SR) aims to reconstruct an accurate high-resolution (HR) image given its low-resolution (LR) counterpart.
It is a typical ill-posed problem, since the LR to HR mapping is highly uncertain.
In order to solve this problem, a large number of methods have been proposed, including interpolation-based (Zhang & Wu., 2006) , reconstruction-based (Zhang et al., 2012) , and learning-based methods (Timofte et al., 2013; Peleg & Elad., 2014; Schulter et al., 2015; Huang et al., 2015; Tai et al., 2017; Tong et al., 2017; Zhang et al., 2018a; Dong et al., 2016) .
In recent years, deep learning based methods have achieved outstanding performance in superresolution reconstruction.
Some effective residual or dense blocks Zhang et al., 2018b; Lim et al., 2017; Ledig et al., 2017; Ahn et al.; Li et al., 2018) have been proposed to make the network wider and deeper and achieved better results.
However, they only pay close attention to improving the feature extraction module, ignoring that the upsampling process with smaller receptive fields does not make full use of those extracted features.
A small convolutional receptive field means that the upsampling process can only perform super-resolution reconstruction based on local feature relationships in the LR image.
However, features in different regions interact with each other, and features in one region affect the upsampling and reconstruction of other regions.
That is to say, a lot of information is lost in the process of upsampling and reconstruction due to the limitation of the receptive field, even though the network extracts a large number of hierarchical features ranging from low frequency to high frequency.
Chariker et al. (2016) show that the brain generates the images we see based on a small amount of information observed by the human eye, rather than acquiring the complete data from a point-by-point scan of the retina.
This process of generating an image is similar to a SR process.
Following this idea, we add global information to SR reconstruction and propose to use relational reasoning to imitate the process by which the human visual system reconstructs images from observed global information.
In general, extracting global information requires a large receptive field.
A large convolutional receptive field usually requires stacking a large number of convolutional layers, but this approach does not work in the upsampling and reconstruction process, because it would produce a huge number of parameters.
Based on the above analysis, we propose an image super-resolution global reasoning network (SR-GRN) which introduces the global reasoning mechanism to the upsampling module and the reconstruction layer.
The model can capture the relationship between disjoint features of the image with a small receptive field, thereby fully exploiting global information as a reference for upsampling and reconstruction.
We mainly propose global reasoning upsampling module (GRUM) and global reasoning reconstruction block (GRRB) as the core structure of the network.
GRUM and GRRB first convert the LR feature map into N nodes, each of which not only represents a feature region in the LR image, but also contains the influence of pixels in other regions on this feature.
Then they learn the relationship between the nodes and fuse the information of each node in a global scope.
After that, GRUM learns the relationship between the channels in each node and amplifies the number of channels for the upsampling process.
And then they convert N nodes into pixels with global reasoning information.
Finally, GRUM and GRRB complete the upsampling and reconstruction process respectively.
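As a rough sketch of this node-projection / reasoning / re-projection pattern (a generic global-reasoning block written for illustration; layer choices, shapes, and names are assumptions rather than the authors' GRUM/GRRB implementation):

```python
import torch
import torch.nn as nn

class GlobalReasoningBlock(nn.Module):
    """Generic sketch: project a feature map to N region nodes, reason over the
    nodes with 1D convolutions, then project the globally-reasoned information
    back onto every pixel."""

    def __init__(self, channels, num_nodes=32, node_dim=64):
        super().__init__()
        self.to_assign = nn.Conv2d(channels, num_nodes, kernel_size=1)       # pixel -> node weights
        self.to_state = nn.Conv2d(channels, node_dim, kernel_size=1)         # pixel features for nodes
        self.node_interact = nn.Conv1d(num_nodes, num_nodes, kernel_size=1)  # mix information across nodes
        self.node_update = nn.Conv1d(node_dim, node_dim, kernel_size=1)      # mix channels within nodes
        self.to_pixels = nn.Conv2d(node_dim, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        assign = self.to_assign(x).flatten(2).softmax(dim=-1)                # (B, N, HW)
        state = self.to_state(x).flatten(2)                                  # (B, D, HW)
        nodes = torch.bmm(assign, state.transpose(1, 2))                     # (B, N, D): region nodes
        nodes = self.node_interact(nodes)                                    # reasoning across nodes
        nodes = self.node_update(nodes.transpose(1, 2)).transpose(1, 2)      # per-node channel update
        out = torch.bmm(assign.transpose(1, 2), nodes)                       # (B, HW, D): back to pixels
        out = out.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.to_pixels(out)                                       # residual connection
```

The paper's GRUM additionally amplifies the number of channels before upsampling; that step is omitted in this sketch.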
In general, our work mainly has the following three contributions:
• We propose an image super-resolution global reasoning network (SRGRN) which draws on the idea of image reconstruction principles of human visual system.
We mainly focus on the upsampling module and the reconstruction module.
The model reconstructs SR images based on relational reasoning in a global scope.
• We propose a global reasoning upsampling module (GRUM) and global reasoning reconstruction block (GRRB), which construct a graph model to implement the relational reasoning among the feature regions in an image via 1D and 2D convolution, and finally add the information obtained by global reasoning to each pixel.
It can provide more contextual information to help generate more accurate details.
• Our proposed GRUM and GRRB are lightweight, which makes them suitable for real-life deployment.
More importantly, GRUM and GRRB balance the number of parameters and the reconstruction performance well.
They can be easily inserted into other models.
In this paper, inspired by the way the human visual system reconstructs images, we propose a super-resolution global reasoning network (SRGRN) for image SR, which aims at completing the reconstruction of SR images through global reasoning.
We mainly propose global reasoning upsampling module (GRUM) and global reasoning reconstruction block (GRRB) as the core of the network.
The GRUM can give the upsampling module the ability to perform relational reasoning in a global scope, which allows this process to overcome the limitations of the receptive field and recover more faithful details by analyzing more contextual information.
The GRRB also enables the reconstruction block to make full use of the interaction between the regions and pixels to reconstruct SR images.
We exploit SRGRN not only to handle low resolution images that are corrupted by three degradation models, but also to handle real-world images.
Extensive benchmark evaluations demonstrate the importance of GRUM and GRRB.
It also indicates that our SRGRN achieves superiority over state-of-the-art methods through global reasoning. | A state-of-the-art model based on global reasoning for image super-resolution | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:699 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
It is fundamental and challenging to train robust and accurate Deep Neural Networks (DNNs) when semantically abnormal examples exist.
Although great progress has been made, there is still one crucial research question which is not thoroughly explored yet: What training examples should be focused on, and how much more should they be emphasised, to achieve robust learning?
In this work, we study this question and propose gradient rescaling (GR) to solve it.
GR modifies the magnitude of the logit vector's gradient to emphasise relatively easier training data points when noise becomes more severe, which functions as explicit emphasis regularisation to improve the generalisation performance of DNNs.
Apart from regularisation, we connect GR to examples weighting and designing robust loss functions.
We empirically demonstrate that GR is highly anomaly-robust and outperforms the state-of-the-art by a large margin, e.g., a 7% increase on CIFAR100 with 40% noisy labels.
It is also significantly superior to standard regularisers in both clean and abnormal settings.
Furthermore, we present comprehensive ablation studies to explore the behaviours of GR under different cases, which is informative for applying GR in real-world scenarios.
DNNs have been successfully applied in diverse applications (Socher et al., 2011; Krizhevsky et al., 2012; LeCun et al., 2015) .
However, their success is heavily reliant on the quality of training data, especially accurate semantic labels for learning supervision.
Unfortunately, on the one hand, maintaining the quality of semantic labels as the scale of training data increases is expensive and almost impossible when the scale becomes excessively large.
On the other hand, it has been demonstrated that DNNs are capable of memorising the whole training data even when all training labels are random (Zhang et al., 2017) .
Therefore, DNNs struggle to discern meaningful data patterns and ignore semantically abnormal examples 1 simultaneously (Krueger et al., 2017; Arpit et al., 2017) .
Consequently, it becomes an inevitable demand for DNNs to hold robustness when training data contains anomalies (Larsen et al., 1998; Natarajan et al., 2013; Sukhbaatar & Fergus, 2014; Xiao et al., 2015; Patrini et al., 2017; Vahdat, 2017; Veit et al., 2017; Li et al., 2017) .
Recently, great progress has been made towards robustness against anomalies when training DNNs (Krueger et al., 2017) .
There are three appealing perspectives in terms of their simplicity and effectiveness:
1) Examples weighting.
For example, knowledge distilling from auxiliary models is popular for heuristically designing weighting schemes.
However, it is challenging to select and train reliable auxiliary models in practice (Li et al., 2017; Malach & Shalev-Shwartz, 2017; Jiang et al., 2018; Ren et al., 2018; Han et al., 2018b) .
2) Robust loss functions (Van Rooyen et al., 2015; Ghosh et al., 2017; Zhang & Sabuncu, 2018; Wang et al., 2019b) ; 3) Explicit regularisation techniques (Arpit et al., 2017) .
Although designing robust losses or explicit regularisation is easier and more flexible in practice, the performance is not the optimal yet.
1 One training example is composed of an input and its corresponding label.
A semantically abnormal example means the input is semantically unrelated to its label, which may come from corrupted input or label.
For example, in Figure 3 in the supplementary material:
1) Out-of-distribution anomalies: An image may contain only background or an object which does not belong to any training class;
2) In-distribution anomalies: An image of class a may be annotated to class b or an image may contain more than one semantic object.
Regarding examples weighting, there is a core research question which is not well answered yet:
What training examples should be focused on and how large the emphasis spread should be?
In this work, we present a thorough study of this practical question under different settings.
For better analysis, we propose two basic and necessary concepts: emphasis focus and spread with explicit definition in Sec. 3.2.
They are conceptually introduced as follows:
Emphasis focus.
It is a common practice to focus on harder instances when training DNNs (Shrivastava et al., 2016; Lin et al., 2017) .
When a dataset is clean, emphasising harder examples achieves faster convergence and better performance, because they have larger gradient magnitudes, which means more information and a larger update step for the model's parameters.
However, when severe noise exists, as demonstrated in (Krueger et al., 2017; Arpit et al., 2017) , DNNs learn simple meaningful patterns first before memorising abnormal ones.
In other words, anomalies are harder to fit and have larger gradient magnitudes in the later stages of training.
Consequently, if we use the default sample weighting in categorical cross entropy (CCE) where harder samples obtain higher weights, anomalies tend to be fitted well especially when a network has large enough capacity.
That is why we need to move the emphasis focus towards relatively easier ones, which serves as emphasis regularisation.
Emphasis spread.
We term the weighting variance of training examples emphasis spread.
The key concept is that we should not treat all examples equally, neither should we let only a few be emphasised and contribute to the training.
Therefore, when emphasis focus changes, the emphasis spread should be adjusted accordingly.
We integrate emphasis focus and spread into a unified example weighting framework.
Emphasis focus defines which training examples receive higher weights, while emphasis spread indicates how large the variance over their weights is.
Specifically, we propose gradient rescaling (GR), which modifies the magnitude of logit vector's gradient.
The logit vector is the output of the last fully connected (FC) layer of a network.
We remark that we do not design the weighting scheme heuristically from scratch.
Instead, it is naturally motivated by the gradient analysis of several loss functions.
Interestingly, GR can be naturally connected to examples weighting, robust losses, explicit regularisation:
1) The gradient magnitude of the logit vector can be regarded as a weight assignment that is built into loss functions (Gopal, 2016; Alain et al., 2016; Zhang et al., 2018b) .
Therefore, rescaling the gradient magnitude is equivalent to adjusting the weights of examples;
2) A specific loss function owns a fixed gradient derivation.
Adjusting the gradient can be treated as a more direct and flexible way of modifying optimisation objectives;
3) Instead of focusing on harder examples 2 by default, we can adjust the emphasis focus to relatively easier ones when noise is severe.
GR serves as emphasis regularisation and differs from standard regularisers, e.g., L2 weight decay, which constrains the weight parameters, and Dropout, which randomly drops neural units (Srivastava et al., 2014). GR is simple yet effective.
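To make the connection between logit gradients and example weights concrete, here is a minimal sketch (the specific emphasis-weighting function below is an illustrative assumption, not the paper's exact scheme):

```python
import torch
import torch.nn.functional as F

def rescaled_logit_gradient(logits, targets, focus=0.5, spread=1.0):
    """Sketch of gradient rescaling at the logit vector.

    For CCE, the gradient w.r.t. the logits of example i is (p_i - y_i), whose
    norm is larger for harder examples. Here we replace that per-example norm
    with a weight that peaks when the true-class probability equals `focus`
    (emphasis focus) and whose sharpness is controlled by `spread`
    (emphasis spread). The weighting form is an illustrative choice.
    """
    probs = logits.softmax(dim=1)
    onehot = F.one_hot(targets, num_classes=logits.size(1)).float()
    grad = probs - onehot                                   # default CCE logit gradient
    p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    emphasis = torch.exp(-((p_true - focus) ** 2) / (2 * spread ** 2))
    scale = emphasis / (grad.norm(dim=1) + 1e-12)           # keep direction, set magnitude
    return grad * scale.unsqueeze(1)
```

During training, such a rescaled gradient would be plugged in at the logits (e.g., via a custom autograd function or backward hook) in place of the default CCE gradient.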
We demonstrate its effectiveness on diverse computer vision tasks using different net architectures:
1) Image classification with clean training data;
2) Image classification with synthetic symmetric label noise, which is more challenging than the asymmetric noise evaluated by (Vahdat, 2017) ; 3) Image classification with real-world unknown anomalies, which may contain open-set noise, e.g., images with only background, or outliers, etc.;
4) Video person re-identification, a video retrieval task containing diverse anomalies.
Beyond, we show that GR is notably better than other standard regularisers, e.g., L2 weight decay and dropout.
Besides, to comprehensively understand GR's behaviours, we present extensive ablation studies.
Main contribution.
Intuitively and principally, we claim that two basic factors, emphasis focus and spread, should be babysat simultaneously when it comes to examples weighting.
To the best of our knowledge, we are the first to thoroughly study and analyse them together in a unified framework.
In this work, we present three main contributions:
1) We analyse and answer a core research question: What training examples should be focused on and how large the emphasis spread should be?
2) We uncover and analyse that two basic factors, emphasis focus and spread, should be babysat simultaneously when it comes to examples weighting.
Consequently, we propose a simple yet effective gradient rescaling framework serving as emphasis regularisation.
3) Extensive experiments on different tasks using different network architectures are reported for better understanding and demonstration of GR's effectiveness, which are also valuable for applying GR in practice.
(Figure caption fragment describing examples of out-of-distribution anomalies, e.g., an image containing only background and no semantic information.) | ROBUST DISCRIMINATIVE REPRESENTATION LEARNING VIA GRADIENT RESCALING: AN EMPHASIS REGULARISATION PERSPECTIVE | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:7 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks.
The novel processing blocks that facilitate efficient information flow are a convolution-type operation block for point sets that blends neighborhood information in a memory-efficient manner; a multi-resolution point cloud processing block; and a crosslink block that efficiently shares information across low- and high-resolution processing branches.
Combining these blocks, we design significantly wider and deeper architectures.
We extensively evaluate the proposed architectures on multiple point segmentation benchmarks (ShapeNetPart, ScanNet, PartNet) and report systematic improvements in terms of both accuracy and memory consumption by using our generic modules in conjunction with multiple recent architectures (PointNet++, DGCNN, SpiderCNN, PointCNN).
We report a 9.7% increase in IoU on the PartNet dataset, which is the most complex, while decreasing memory footprint by 57%.
Geometry processing has recently started profiting from applying deep learning to graphics and 3D shape analysis (Qi et al., 2017b; Wang et al., 2018b) , with networks that guarantee desirable properties of point cloud processing, such as permutation-invariance and quantization-free representation (Wang et al., 2017) .
Despite these advances, several differences still impede transferring to geometry processing the breakthroughs made in computer vision.
The different nature of 3D data dictates re-inventing for geometry processing the functionality of basic image processing blocks, such as multi-resolution processing or convolution operations.
When operating with unstructured point clouds, one has to resort to elementary local pooling operations that group information within a neighborhood based on Euclidean distance.
Exemplary methods such as the PointNet/PointNet++ architectures (Qi et al., 2017a; make design choices that potentially compromise performance.
In particular, the computation and memory demands of point network blocks can affect both training speed and, more crucially, inference time.
One of the main bottlenecks for point networks is their memory-intensive nature: as detailed in Sec. 3.1, the PointNet++ architecture and its variants replicate point neighborhood information, letting every node carry in its feature vector information about all of its neighborhood.
This results in significant memory overhead, and limits the number of layers, features and feature compositions one can compute.
In this work, we enhance point processing networks by introducing a set of modules that improve memory footprint and accuracy, without compromising on inference speed.
We call the result architectures Lean Point Networks, to highlight their lightweight memory budget.
We build on the decreased memory budget to go deeper with point networks.
As has been witnessed repeatedly in the image domain (Huang et al., 2016; Zagoruyko & Komodakis, 2016) , we show that going deep also increases the prediction accuracy of point networks.
We start in Sec. 3.2 by replacing the grouping operation used in point cloud processing networks with a low-memory alternative that is the point cloud processing counterpart of efficient image processing implementations of convolution.
The resulting 'point convolution block' is 67% more memory-efficient and 41% faster than its PointNet++ counterpart, while exhibiting favorable training properties due to more effective mixing of information across neighborhoods.
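To illustrate the flavour of such a memory-lean block (a generic sketch under our own assumptions about the grouping pattern; it is not the paper's exact convPN block):

```python
import torch
import torch.nn as nn

class LeanNeighborhoodConv(nn.Module):
    """Sketch of a convolution-style point block: point features are transformed
    once, then blended across each neighborhood with an index-based reduction.
    The gathered (N*K, C) tensor below is transient and is not saved for the
    backward pass, unlike grouping followed by an MLP on replicated features."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.point_mlp = nn.Linear(in_channels, out_channels)

    def forward(self, feats, neighbor_idx):
        # feats: (N, C) per-point features; neighbor_idx: (N, K) neighbor indices.
        transformed = self.point_mlp(feats)                    # (N, C_out), computed once per point
        n, k = neighbor_idx.shape
        pooled = transformed.new_zeros(n, transformed.size(1))
        flat_idx = neighbor_idx.reshape(-1)                    # (N*K,)
        centers = torch.arange(n, device=feats.device).repeat_interleave(k)
        # Sum each neighbor's transformed feature into its center point, then average.
        pooled.index_add_(0, centers, transformed[flat_idx])
        return pooled / k
```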
We then turn in Sec. 3.3 to improving the information flow across layers and scales within point networks through three techniques: a multi-resolution variant for multi-scale network which still delivers the multi-scale context but at a reduced memory and computational cost, residual links, and a new cross-link block that broadcasts multi-scale information across the network branches.
By combining these advances we are able to successfully train deeper point networks that allow us to leverage upon larger, recently introduced datasets.
In Sec. 4 we thoroughly validate our contributions on the ShapeNet-Part, ScanNet and PartNet segmentation benchmarks, reporting systematic improvements over the PointNet++ baseline.
As shown in Fig. 1 , when combined these contributions deliver multifold reductions in memory consumption while improving performance, allowing us in a second stage to train increasingly wide and deep networks.
On PartNet, the most complex dataset, our deep architecture achieves a 9.7% relative increase in IoU while decreasing memory footprint by 57% and inference time by 47%.
Having thoroughly ablated our design choices on the PointNet++ baseline, in Sec. 4.3 we turn to confirming the generic nature of our blocks.
We extend the scope of our experiments to three additional networks,
(i) DGCNN (Wang et al., 2018b) ,
(ii) SpiderCNN (Xu et al., 2018) and
(iii) PointCNN (Li et al., 2018b) and report systematic improvements in memory efficiency and performance.
In this work we have introduced new generic building blocks for point processing networks, that exhibit favorable memory, computation, and optimization properties when compared to the current counterparts of state-of-the-art point processing networks.
When based on PointNet++, our lean architecture convPN wins on all counts: memory efficiency (-67% w.r.t. PointNet++) and speed (-41% and -68% on inference time and length of the backward pass, respectively).
Its deep counterpart has a marginal cost in terms of efficiency and achieves the best IoU on PartNet (+9.7% over PointNet++).
Those generic and modular blocks exhibit similar performance on all of the additional tested architectures with a significant decrease in memory (up to -69%) and increase in IoU (up to +8.0%).
From the promising results on PartNet and the extremely low cost of depth in our architectures, we anticipate that adding these components to the armament of the deep geometry processing community will allow researchers to train the next generation of point processing networks by leveraging upon the advent of larger shape datasets (Mo et al., 2018; Koch et al., 2018) . | We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:70 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Graph Neural Networks (GNNs) have received tremendous attention recently due to their power in handling graph data for different downstream tasks across different application domains.
The key component of a GNN is its graph convolutional filters, and recently various kinds of filters have been designed.
However, there is still a lack of in-depth analysis on (1) whether there exists a best filter that can perform best on all graph data; (2) which graph properties will influence the optimal choice of graph filter; and (3) how to design an appropriate filter adaptive to the graph data.
In this paper, we focus on addressing the above three questions.
We first propose a novel assessment tool to evaluate the effectiveness of graph convolutional filters for a given graph.
Using the assessment tool, we find that there is no single filter that acts as a `silver bullet' and performs the best on all possible graphs.
In addition, different graph structure properties will influence the optimal graph convolutional filter's design choice.
Based on these findings, we develop Adaptive Filter Graph Neural Network (AFGNN), a simple but powerful model that can adaptively learn a task-specific filter.
For a given graph, it leverages the graph filter assessment as regularization and learns to combine filters from a set of base filters.
Experiments on both synthetic and real-world benchmark datasets demonstrate that our proposed model can indeed learn an appropriate filter and perform well on graph tasks.
Graph Neural Networks (GNNs) are a family of powerful tools for representation learning on graph data, which has been drawing more and more attention over the past several years.
GNNs can obtain informative node representations for a graph of arbitrary size and attributes, and has shown great effectiveness in graph-related down-stream applications, such as node classification (Kipf & Welling, 2017) , graph classification (Wu et al., 2019b) , graph matching (Bai et al., 2019) , recommendation systems (Ying et al., 2018) , and knowledge graphs (Schlichtkrull et al., 2018) .
As GNNs have superior performance in graph-related tasks, the question as to what makes GNNs so powerful is naturally raised.
Note that GNNs adapt the concept of the convolution operation to the graph domain.
To obtain a representation of a specific node in a GNN, the node aggregates representations of its neighbors with a convolutional filter.
For a task related to graph topology, the convolutional filter can help GNN nodes to get better task-specific representations (Xu et al., 2019) .
Therefore, it is the filter that makes GNNs powerful, and thus the key to designing robust and accurate GNNs is to design proper graph convolutional filters.
Recently, many GNN architectures have been proposed (Zhou et al., 2018) with their own graph filter designs.
However, none of them have properly answered the following fundamental questions of GNNs: (1) Is there a best filter that works for all graphs?
(2) If not, what are the properties of graph structure that will influence the performance of graph convolutional filters?
(3) Can we design an algorithm to adaptively find the appropriate filter for a given graph?
In this paper, we focus on addressing the above three questions for semi-supervised node classification task.
Inspired by studies in Linear Discriminant Analysis (LDA), we propose a Graph Filter Discriminant (GFD) Score metric to measure the power of a graph convolutional filter in discriminating node representations of different classes on a specific graph.
We have analyzed all the existing GNNs' filters with this assessment method to answer the three aforementioned questions.
We found that no single filter design can achieve optimal results on all possible graphs.
In other words, for different graph data, we should adopt different graph convolutional filters to achieve optimal performance.
We then experimentally and theoretically analyze how graph structure properties influence the optimal choice of graph convolutional filters.
Based on all of our findings, we propose the Adaptive Filter Graph Neural Network (AF-GNN), which can adaptively learn a proper model for the given graph.
We use the Graph Filter Discriminant (GFD) Score as an extra loss term to guide the network to learn a good data-specific filter, which is a linear combination of a set of base filters.
We show that the proposed Adaptive Filter can better capture graph topology and separate features on both real-world datasets and synthetic datasets.
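A minimal sketch of what learning such a combination could look like is shown below (the choice of base filters, the exact form of the GFD score, and all names are illustrative assumptions, not the paper's specification):

```python
import torch
import torch.nn as nn

class AdaptiveFilterLayer(nn.Module):
    """Sketch: learn a convex combination of precomputed N x N base filters
    (e.g., powers of a normalized adjacency; which filters form the basis is
    an assumption here)."""

    def __init__(self, base_filters, in_dim, out_dim):
        super().__init__()
        self.base_filters = base_filters
        self.mix_logits = nn.Parameter(torch.zeros(len(base_filters)))
        self.linear = nn.Linear(in_dim, out_dim)

    def combined_filter(self):
        mix = self.mix_logits.softmax(dim=0)
        return sum(w * f for w, f in zip(mix, self.base_filters))

    def forward(self, x):
        return self.linear(self.combined_filter() @ x)

def gfd_score(h, labels):
    """LDA-style discriminant score of filtered features h (illustrative form):
    ratio of between-class to within-class scatter."""
    classes = labels.unique()
    overall_mean = h.mean(dim=0)
    between, within = 0.0, 0.0
    for c in classes:
        hc = h[labels == c]
        mc = hc.mean(dim=0)
        between = between + hc.size(0) * (mc - overall_mean).pow(2).sum()
        within = within + (hc - mc).pow(2).sum()
    return between / (within + 1e-12)

# Training would minimize the task loss minus a weighted gfd_score of the filtered
# features, matching the idea of using the negative GFD Score as an extra objective term.
```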
We highlight our main contributions as follows:
• We propose an assessment tool: Graph Filter Discriminant Score, to analyze the effectiveness of graph convolutional filters.
Using this tool, we find that no single filter can work best for all graphs; the optimal choice of a graph convolutional filter depends on the graph data.
• We propose Adaptive Filter Graph Neural Network that can adaptively learn a proper filter for a specific graph using the GFD Score as guidance.
• We show that the proposed model can find better filters and achieve better performance compared to existing GNNs, on both real-word and newly created benchmark datasets.
Understanding the graph convolutional filters in GNNs is very important, as it can help to determine whether a GNN will work on a given graph, and can provide important guidance for GNN design.
In our paper, we focus on the semi-supervised node classification task.
We first propose the Graph Filter Discriminant Score as an assessment tool for graph convolutional filter evaluation, and then apply this GFD Score to analyze a family of existing filters as a case study.
Using this tool, we learn that no single fixed filter can produce optimal results on all graphs.
We then develop a simple but powerful GNN model, the Adaptive Filter Graph Neural Network, which can learn to combine a family of filters and obtain a task-specific powerful filter.
We also propose to add the negative GFD Score as an extra component to the objective function; it acts as guidance for the model to learn a more effective filter.
Experiments show that our approach outperforms many existing GNNs on both benchmark and synthetic graphs. | Propose an assessment framework to analyze and learn graph convolutional filter | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:700 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The advance of node pooling operations in Graph Neural Networks (GNNs) has lagged behind the feverish design of new message-passing techniques, and pooling remains an important and challenging endeavor for the design of deep architectures.
In this paper, we propose a pooling operation for GNNs that leverages a differentiable unsupervised loss based on the minCut optimization objective.
For each node, our method learns a soft cluster assignment vector that depends on the node features, the target inference task (e.g., a graph classification loss), and, thanks to the minCut objective, also on the connectivity structure of the graph.
Graph pooling is obtained by applying the matrix of assignment vectors to the adjacency matrix and the node features.
We validate the effectiveness of the proposed pooling method on a variety of supervised and unsupervised tasks.
A fundamental component in deep convolutional neural networks is the pooling operation, which replaces the output of convolutions with local summaries of nearby points and is usually implemented by maximum or average operations (Lee et al., 2016) .
State-of-the-art architectures alternate convolutions, which extrapolate local patterns irrespective of the specific location on the input signal, and pooling, which lets the ensuing convolutions capture aggregated patterns.
Pooling allows the network to learn abstract representations in deeper layers by discarding information that is superfluous for the task, and keeps model complexity under control by limiting the growth of intermediate features.
Graph Neural Networks (GNNs) extend the convolution operation from regular domains, such as images or time series, to data with arbitrary topologies and unordered structures described by graphs (Battaglia et al., 2018) .
The development of pooling strategies for GNNs, however, has lagged behind the design of newer and more effective message-passing (MP) operations (Gilmer et al., 2017) , such as graph convolutions, mainly due to the difficulty of defining an aggregated version of the original graph that supports the pooled signal.
A naïve pooling strategy in GNNs is to average all node features (Li et al., 2016) , but it has limited flexibility since it does not extract local summaries of the graph structure, and no further MP operations can be applied afterwards.
An alternative approach consists in pre-computing coarsened versions of the original graph and then fitting the data to these deterministic structures (Bruna et al., 2013) .
While this aggregation accounts for the connectivity of the graph, it ignores task-specific objectives as well as the node features.
In this paper, we propose a differentiable pooling operation implemented as a neural network layer, which can be seamlessly combined with other MP layers (see Fig. 1 ).
The parameters in the pooling layer are learned by combining the task-specific loss with an unsupervised regularization term, which optimizes a continuous relaxation of the normalized minCUT objective.
The minCUT identifies dense graph components, where the node features become locally homogeneous after message passing.
By gradually aggregating these components, the GNN learns to distil global properties from the graph.
The proposed minCUT pooling operator (minCUTpool) yields partitions that
1) cluster together nodes which have similar features and are strongly connected on the graph, and
2) take into account the objective of the downstream task.
The proposed method is straightforward to implement: the cluster assignments, the loss, graph coarsening, and feature pooling are all computed with standard linear algebra operations.
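Concretely, those operations can be sketched as follows (a minimal single-graph version; the paper's full loss includes additional regularization and normalization details not shown here):

```python
import torch
import torch.nn as nn

class MinCutPool(nn.Module):
    """Sketch of minCUT-style pooling: soft assignments S from node features,
    coarsened graph S^T A S, pooled features S^T X, plus an unsupervised
    cut-based regularization term."""

    def __init__(self, in_dim, num_clusters):
        super().__init__()
        self.assign = nn.Linear(in_dim, num_clusters)

    def forward(self, x, adj):
        s = self.assign(x).softmax(dim=-1)           # (N, K) soft cluster assignments
        x_pool = s.t() @ x                           # (K, F) pooled node features
        adj_pool = s.t() @ adj @ s                   # (K, K) coarsened adjacency
        # Continuous relaxation of the normalized minCUT objective:
        deg = torch.diag(adj.sum(dim=-1))
        cut_loss = -torch.trace(s.t() @ adj @ s) / (torch.trace(s.t() @ deg @ s) + 1e-12)
        return x_pool, adj_pool, cut_loss

# cut_loss is added to the task loss (e.g., graph classification cross-entropy), so the
# assignments depend on the features, the connectivity, and the downstream objective.
```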
There are several differences between minCUTpool and classic SC methods.
SC partitions the graph based on the Laplacian, but does not account for the node features.
Instead, the cluster assignments s_i found by minCUTpool depend on x_i, which works well if connected nodes have similar features.
This is a reasonable assumption in GNNs since, even in disassortative graphs (i.e., networks where dissimilar nodes are likely to be connected (Newman, 2003) ), the features tend to become similar due to the MP operations.
Another difference is that SC handles a single graph and is not conceived for tasks with multiple graphs to be partitioned independently.
Instead, thanks to the independence of the model parameters from the number of nodes N and from the graph spectrum, minCUTpool can generalize to out-of-sample data.
This feature is fundamental in problems such as graph classification, where each sample is a graph with a different structure, and allows to train the model on small graphs and process larger ones at inference time.
Finally, minCUTpool directly uses the soft cluster assignments rather than performing k-means afterwards.
We proposed a pooling layer for GNNs that coarsens a graph by taking into account both the connectivity structure and the node features.
The layer optimizes a regularization term based on the minCUT objective, which is minimized in conjunction with the task-specific loss to produce node partitions that are optimal for the task at hand.
We tested the effectiveness of our pooling strategy on unsupervised node clustering tasks, by optimizing only the unsupervised clustering loss, as well as supervised graph classification tasks on several popular benchmark datasets.
Results show that minCUTpool performs significantly better than existing pooling strategies for GNNs. | A new pooling layer for GNNs that learns how to pool nodes, according to their features, the graph connectivity, and the dowstream task objective. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:701 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Knowledge graphs are structured representations of real world facts.
However, they typically contain only a small subset of all possible facts.
Link prediction is the task of inferring missing facts based on existing ones.
We propose TuckER, a relatively simple yet powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples.
By using this particular decomposition, parameters are shared between relations, enabling multi-task learning.
TuckER outperforms previous state-of-the-art models across several standard link prediction datasets.
Vast amounts of information available in the world can be represented succinctly as entities and relations between them.
Knowledge graphs are large, graph-structured databases which store facts in triple form (e_s, r, e_o), with e_s and e_o representing subject and object entities and r a relation.
However, far from all available information is stored in existing knowledge graphs, which creates the need for algorithms that automatically infer missing facts.
Knowledge graphs can be represented by a third-order binary tensor, where each element corresponds to a triple, 1 indicating a true fact and 0 indicating the unknown (either a false or a missing fact).
The task of link prediction is to infer which of the 0 entries in the tensor are indeed false, and which are missing but actually true. A large number of approaches to link prediction so far have been linear, based on various methods of factorizing the third-order binary tensor BID12 BID22 BID19 BID7 .
Recently, state-of-the-art results have been achieved using non-linear convolutional models BID3 BID0 .
Despite achieving very good performance, the fundamental problem with deep, non-linear models is that they are non-transparent and poorly understood, as opposed to more mathematically principled and widely studied tensor decomposition models. In this paper, we introduce TuckER (E stands for entities, R for relations), a simple linear model for link prediction in knowledge graphs, based on Tucker decomposition BID21 of the binary tensor of triples.
Tucker decomposition factorizes a tensor into a core tensor multiplied by a matrix along each mode.
In our case, rows of the matrices contain entity and relation embeddings, while entries of the core tensor determine the level of interaction between them.
Due to having the core tensor, unlike simpler models, such as RESCAL, DistMult and ComplEx, where parameters for each relation are often learned separately, TuckER makes use of multi-task learning between different relations BID24 .
Subject and object entity embedding matrices are assumed equivalent, i.e. we make no distinction between the embeddings of an entity depending on whether it appears as a subject or as an object in a particular triple.
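For illustration, a TuckER-style scoring function implied by this description can be sketched as follows (a minimal version with our own variable names; dropout, batch normalization, and the training objective are omitted):

```python
import torch
import torch.nn as nn

class TuckERScore(nn.Module):
    """Sketch of a TuckER-style scorer: a shared core tensor W contracted with
    subject-entity, relation, and object-entity embeddings."""

    def __init__(self, num_entities, num_relations, d_e, d_r):
        super().__init__()
        self.E = nn.Embedding(num_entities, d_e)   # shared by subjects and objects
        self.R = nn.Embedding(num_relations, d_r)
        self.W = nn.Parameter(torch.randn(d_r, d_e, d_e) * 0.1)  # core tensor

    def forward(self, subj_idx, rel_idx):
        e_s = self.E(subj_idx)                           # (B, d_e)
        w_r = self.R(rel_idx)                            # (B, d_r)
        # Contract the core tensor with the relation and subject embeddings,
        # then score against every candidate object entity.
        w = torch.einsum('bk,kij->bij', w_r, self.W)     # (B, d_e, d_e)
        x = torch.einsum('bi,bij->bj', e_s, w)           # (B, d_e)
        scores = x @ self.E.weight.t()                   # (B, num_entities)
        return torch.sigmoid(scores)
```

Because the core tensor is shared across relations, knowledge is transferred between them, which is the multi-task effect described above.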
Our experiments show that TuckER achieves state-of-the-art results across all standard link prediction datasets.
In this work, we introduce TuckER, a relatively simple yet highly flexible linear model for link prediction in knowledge graphs based on the Tucker decomposition of a binary tensor of training set triples, which achieves state-of-the-art results on several standard link prediction datasets.
TuckER's number of parameters grows linearly with respect to embedding dimension as the number of entities or relations in a knowledge graph increases, which makes it easily scalable to large knowledge graphs.
Future work might include exploring how to incorporate background knowledge on individual relation properties into the existing model. | We propose TuckER, a relatively simple but powerful linear model for link prediction in knowledge graphs, based on Tucker decomposition of the binary tensor representation of knowledge graph triples. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:702 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
With innovations in architecture design, deeper and wider neural network models deliver improved performance on a diverse variety of tasks.
But the increased memory footprint of these models presents a challenge during training, when all intermediate layer activations need to be stored for back-propagation.
Limited GPU memory forces practitioners to make sub-optimal choices: either train inefficiently with smaller batches of examples; or limit the architecture to have lower depth and width, and fewer layers at higher spatial resolutions.
This work introduces an approximation strategy that significantly reduces a network's memory footprint during training, but has negligible effect on training performance and computational expense.
During the forward pass, we replace activations with lower-precision approximations immediately after they have been used by subsequent layers, thus freeing up memory.
The approximate activations are then used during the backward pass.
This approach limits the accumulation of errors across the forward and backward pass---because the forward computation across the network still happens at full precision, and the approximation has a limited effect when computing gradients to a layer's input.
Experiments, on CIFAR and ImageNet, show that using our approach with 8- and even 4-bit fixed-point approximations of 32-bit floating-point activations has only a minor effect on training and validation performance, while affording significant savings in memory usage.
Deeper neural network models are able to express more complex functions, and recent results have shown that with the use of residual BID7 and skip BID9 connections to address vanishing gradients, such networks can be trained effectively to leverage this additional capacity.
As a result, the use of deeper network architectures has become prevalent, especially for visual inference tasks BID8 .
The shift to larger architectures has delivered significant improvements in performance, but also increased demand on computational resources.
In particular, deeper network architectures require significantly more on-device memory during training-much more so than for inference.
This is because training requires retaining the computed activations of all intermediate layers since they are needed to compute gradients during the backward pass. The increased memory footprint means fewer training samples can fit in memory and be processed as a batch on a single GPU.
This is inefficient: smaller batches are not able to saturate all available parallel cores, especially because computation in "deeper" architectures is distributed to be more sequential.
Moreover, smaller batches also complicate the use of batch-normalization BID11 , since batch statistics are now computed over fewer samples making training less stable.
These considerations often force the choice of architecture to be based not just on optimality for inference, but also on practical feasibility for training; for instance, deep residual networks for large images drop resolution early, so that most layers have smaller-sized outputs. While prior work to address this has traded off memory for computation BID13 BID4 BID3 , its focus has been on enabling exact gradient computation.
However, since stochastic gradient descent (SGD) inherently works with noisy gradients at each iteration, we propose an algorithm that computes reasonably approximate gradients, while significantly reducing a network's memory footprint and with virtually no additional computational cost.
Figure 1: Proposed Approach. We show the computations involved in the forward and backward pass during network training for a single "pre-activation" layer, with possible residual connections. The forward pass is exact, but we discard full-precision activations right after use by subsequent layers (we store these in common global buffers, and overwrite activations once they have been used and are no longer needed for forward computation). Instead, we store a low-precision approximation of the activations which occupies less memory, and use these during back-propagation. Our approach limits errors in the gradient flowing back to the input of a layer, and thus the accumulation of errors across layers. Since our approximation preserves the signs of activations, most of the computations along the path back to the input are exact, with the only source of error being the use of the approximate activations while back-propagating through the variance computation in batch-normalization.
Our work is motivated by distributed training algorithms that succeed despite working with approximate and noisy gradients aggregated across multiple devices (BID15; BID2; Seide et al., 2014; Wen et al., 2017).
We propose using low-precision approximate activations, which require less memory, to compute approximate gradients during back-propagation (backprop) on a single device.
Note that training with a lower precision of 16- instead of 32-bit floating-point representations is not uncommon.
But this lower precision is used for all computation, and thus allows only a modest lowering of precision, since the approximation error builds up across the forward and then backward pass through all layers. In this work, we propose a new backprop implementation that performs the forward pass through the network at full precision, and incurs limited approximation error during the backward pass.
We use the full-precision version of a layer's activations to compute the activations of subsequent layers.
However, once these activations have been used in the forward pass, our method discards them and stores a low-precision approximation instead.
During the backward pass, gradients are propagated back through all the layers at full precision, but instead of using the original activations, we use their low-precision approximations.
As a result, we incur an approximation error at each layer when computing the gradients to the weights from multiplying the incoming gradient with the approximate activations, but ensure the error in gradients going back to the previous layer is minimal.
Our experimental results show that even using only 4-bit fixed-point approximations for the original 32-bit floating-point activations causes only minor degradation in training quality.
This significantly lowers the memory required for training, which comes essentially for "free"-incurring only the negligible additional computational cost of converting activations to and from low precision representations.
Our memory-efficient version of backprop is thus able to use larger batch sizes at each iteration-to fully use available parallelism and compute stable batch statistics-and makes it practical for researchers to explore the use of much larger and deeper architectures than before.
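To make the mechanism described above concrete, the following is a minimal, hypothetical PyTorch-style sketch (not the authors' released implementation) of a linear layer that discards its full-precision input after the forward pass and keeps only a uniformly quantized copy for computing the weight gradient. The 4-bit setting, the per-tensor min/max scaling, and the uint8 container are illustrative assumptions; real memory savings would additionally require packing two 4-bit values per byte.

```python
import torch

class ApproxLinear(torch.autograd.Function):
    # Linear layer that keeps only a uniformly quantized copy of its input
    # activation for the backward pass (a sketch of the idea, not the paper's code).
    BITS = 4  # assumed bit width

    @staticmethod
    def forward(ctx, x, weight):
        y = x @ weight.t()  # exact forward computation
        levels = 2 ** ApproxLinear.BITS - 1
        lo, hi = x.min(), x.max()
        scale = (hi - lo).clamp(min=1e-8) / levels
        q = torch.round((x - lo) / scale).to(torch.uint8)  # low-precision copy of x
        ctx.save_for_backward(q, weight, lo, scale)         # full-precision x is not saved
        return y

    @staticmethod
    def backward(ctx, grad_out):
        q, weight, lo, scale = ctx.saved_tensors
        x_approx = q.float() * scale + lo   # dequantize the stored activation
        grad_x = grad_out @ weight          # exact path for the gradient to the input
        grad_w = grad_out.t() @ x_approx    # approximate gradient to the weights
        return grad_x, grad_w
```

In use, `y = ApproxLinear.apply(x, weight)` would replace a standard linear call; the approximation only enters the weight-gradient term, which mirrors the error analysis sketched in the text.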
We introduced a new algorithm for approximate gradient computation in neural network training, that significantly reduces the amount of required on-device memory.
Our experiments show that this comes at a minimal cost in terms of both quality of the learned models, and computational expense.
With a lower memory footprint, our method allows training with larger batches in each iteration, improving efficiency and stability, and enables the exploration of deeper architectures that were previously impractical to train.
We will release our reference implementation upon publication. Our method shows that SGD is reasonably robust to working with approximate activations.
While we used an extremely simple approximation strategy (uniform quantization) in this work, we are interested in exploring whether more sophisticated techniques, e.g., based on random projections or vector quantization, can provide better trade-offs, especially if informed by statistics of gradients and errors from prior iterations.
We are also interested in investigating whether our approach to partial approximation can be utilized in other settings, especially to reduce inter-device communication for distributed training with data or model parallelism. | An algorithm to reduce the amount of memory required for training deep networks, based on an approximation strategy. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:703 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Estimating the frequencies of elements in a data stream is a fundamental task in data analysis and machine learning.
The problem is typically addressed using streaming algorithms which can process very large data using limited storage.
Today's streaming algorithms, however, cannot exploit patterns in their input to improve performance.
We propose a new class of algorithms that automatically learn relevant patterns in the input data and use them to improve its frequency estimates.
The proposed algorithms combine the benefits of machine learning with the formal guarantees available through algorithm theory.
We prove that our learning-based algorithms have lower estimation errors than their non-learning counterparts.
We also evaluate our algorithms on two real-world datasets and demonstrate empirically their performance gains.
Classical algorithms provide formal guarantees over their performance, but often fail to leverage useful patterns in their input data to improve their output.
On the other hand, deep learning models are highly successful at capturing and utilizing complex data patterns, but often lack formal error bounds.
The last few years have witnessed a growing effort to bridge this gap and introduce algorithms that can adapt to data properties while delivering worst case guarantees.
Deep learning modules have been integrated into the design of Bloom filters (Kraska et al., 2018; BID18), caching algorithms (Lykouris & Vassilvitskii, 2018), graph optimization (BID12), similarity search (BID22; BID29) and compressive sensing (BID3).
This paper makes a significant step toward this vision by introducing frequency estimation streaming algorithms that automatically learn to leverage the properties of the input data. Estimating the frequencies of elements in a data stream is one of the most fundamental subroutines in data analysis.
It has applications in many areas of machine learning, including feature selection (BID0), ranking (Dzogang et al., 2015), semi-supervised learning (BID27) and natural language processing (Goyal et al., 2012).
It has also been used for network measurements (Estan & Varghese, 2003; BID30; BID28) and security (BID23).
Frequency estimation algorithms have been implemented in popular data processing libraries, such as Algebird at Twitter (BID4).
They can answer practical questions like: what are the most searched words on the Internet?
or how much traffic is sent between any two machines in a network? The
frequency estimation problem is formalized as follows: given a sequence S of elements from some universe U, for any element i ∈ U, estimate f_i, the number of times i occurs in S. If one could store all arrivals from the stream S, one could sort the elements and compute their frequencies. However
, in big data applications, the stream is too large (and may be infinite) and cannot be stored. This challenge
has motivated the development of streaming algorithms, which read the elements of S in a single pass and compute a good estimate of the frequencies using a limited amount of space. Over the last
two decades, many such streaming algorithms have been developed, including Count-Sketch (BID7), Count-Min (BID11) and multistage filters (Estan & Varghese, 2003). The performance
guarantees of these algorithms are well understood, with upper and lower bounds matching up to O(·) factors (Jowhari et al., 2011). However, such streaming
algorithms typically assume generic data and do not leverage useful patterns or properties of their input. For example, in text data
, the word frequency is known to be inversely correlated with the length of the word. Analogously, in network
data, certain applications tend to generate more traffic than others. If such properties can
be harnessed, one could design frequency estimation algorithms that are much more efficient than the existing ones. Yet, it is important to
do so in a general framework that can harness various useful properties, instead of using handcrafted methods specific to a particular pattern or structure (e.g., word length, application type). In this paper, we introduce
learning-based frequency estimation streaming algorithms. Our algorithms are equipped
with a learning model that enables them to exploit data properties without being specific to a particular pattern or knowing the useful property a priori. We further provide theoretical
analysis of the guarantees associated with such learning-based algorithms. We focus on the important class of "hashing-based" algorithms, which includes some of the most used algorithms such as Count-Min, Count-Median and Count-Sketch. Informally, these algorithms hash
data items into B buckets, count the number of items hashed into each bucket, and use the bucket value as an estimate of item frequency. The process can be repeated using
multiple hash functions to improve accuracy. Hashing-based algorithms have several
useful properties. In particular, they can handle item deletions
, which are implemented by decrementing the respective counters. Furthermore, some of them (notably Count-Min
) never underestimate the true frequencies, i.e., the estimated frequency is always at least the true frequency f_i. However, hashing algorithms lead to estimation
errors due to collisions: when two elements are mapped to the same bucket, they affect each other's estimates. Although collisions are unavoidable given the
space constraints, the overall error significantly depends on the pattern of collisions. For example, collisions between high-frequency
elements ("heavy hitters") result in a large estimation error, and ideally should be minimized. The existing algorithms, however, use random hash
functions, which means that collisions are controlled only probabilistically. Our idea is to use a small subset of S, call it S', to learn the heavy hitters. We can then assign heavy hitters their own buckets
to avoid the more costly collisions. It is important to emphasize that we are learning
the properties that identify heavy hitters as opposed to the identities of the heavy hitters themselves. For example, in the word frequency case, shorter
words tend to be more popular. The subset S' itself may miss many of the popular
words, but whichever words are popular in S' are likely to be short. Our objective is not to learn the identity of high
frequency words using S'. Rather, we hope that a learning model trained on S'
learns that short words are more frequent, so that it can identify popular words even if they did not appear in S'. Our main contributions are as follows: • We introduce
learning-based frequency estimation streaming algorithms, which learn the properties of heavy hitters in their input and exploit this information to reduce errors. • We provide performance guarantees showing that our algorithms can deliver a logarithmic factor improvement in the error bound over their non-learning counterparts. Furthermore, we show that our learning-based instantiation
of Count-Min, a widely used algorithm, is asymptotically optimal among all instantiations of that algorithm. See Table 4.1 in Section 4.1 for the details. • We evaluate
our learning-based algorithms using two real-world
datasets: traffic load on an Internet backbone link and search query popularity. In comparison to their non-learning counterparts, our algorithms
yield performance gains that range from 18% to 71%.
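To ground the hashing-based scheme and the learned heavy-hitter idea described above, here is a small, hypothetical Python sketch of a learned Count-Min structure. The oracle interface, number of rows, and bucket count are illustrative assumptions rather than the paper's implementation.

```python
import random
from collections import defaultdict

class LearnedCountMin:
    # Count-Min sketch with a learned oracle: items the oracle flags as heavy
    # hitters get exact, dedicated counters; the rest share hashed buckets.
    def __init__(self, num_rows, num_buckets, is_heavy_oracle):
        self.is_heavy = is_heavy_oracle          # e.g. a trained classifier's predict function
        self.heavy_counts = defaultdict(int)     # unique "buckets" for predicted heavy hitters
        self.num_buckets = num_buckets
        self.seeds = [random.randrange(2**32) for _ in range(num_rows)]
        self.tables = [[0] * num_buckets for _ in range(num_rows)]

    def _bucket(self, item, seed):
        return hash((seed, item)) % self.num_buckets

    def update(self, item, delta=1):
        if self.is_heavy(item):
            self.heavy_counts[item] += delta     # no collisions for predicted heavy hitters
        else:
            for seed, table in zip(self.seeds, self.tables):
                table[self._bucket(item, seed)] += delta

    def estimate(self, item):
        if self.is_heavy(item):
            return self.heavy_counts[item]
        # Count-Min rule: collisions only inflate counts, so take the minimum.
        return min(table[self._bucket(item, seed)]
                   for seed, table in zip(self.seeds, self.tables))
```

A toy oracle for the word example above could be `lambda w: len(w) <= 4`, which routes short (and hence likely frequent) words to their own counters while everything else falls back to the standard sketch.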
We have presented a new approach for designing frequency estimation streaming algorithms by augmenting them with a learning model that exploits data properties.
We have demonstrated the benefits of our design both analytically and empirically.
We envision that our work will motivate a deeper integration of learning in algorithm design, leading to more efficient algorithms. | Data stream algorithms can be improved using deep learning, while retaining performance guarantees. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:704 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Link prediction in simple graphs is a fundamental problem in which new links between nodes are predicted based on the observed structure of the graph.
However, in many real-world applications, there is a need to model relationships among nodes which go beyond pairwise associations.
For example, in a chemical reaction, relationship among the reactants and products is inherently higher-order.
Additionally, there is need to represent the direction from reactants to products.
Hypergraphs provide a natural way to represent such complex higher-order relationships.
Even though Graph Convolutional Networks (GCN) have recently emerged as a powerful deep learning-based approach for link prediction over simple graphs, their suitability for link prediction in hypergraphs is unexplored -- we fill this gap in this paper and propose Neural Hyperlink Predictor (NHP). NHP adapts GCNs for link prediction in hypergraphs. We propose two variants of NHP --NHP-U and NHP-D -- for link prediction over undirected and directed hypergraphs, respectively.
To the best of our knowledge, NHP-D is the first method for link prediction over directed hypergraphs.
Through extensive experiments on multiple real-world datasets, we show NHP's effectiveness.
The problem of link prediction in graphs has numerous applications in the fields of social network analysis (BID24), knowledge bases (BID30), and bioinformatics (BID26), to name a few.
However, in many real-world problems relationships go beyond pairwise associations.
For example, in chemical reactions data the relationship representing a group of chemical compounds that can react is inherently higher-order and similarly, the co-authorship relationship in a citation network is higher-order etc.
Hypergraphs provide a natural way to model such higher-order complex relations.
Hyperlink prediction is the problem of predicting such missing higher-order relationships in a hypergraph. Besides the higher-order relationships, modeling the direction information between these relationships is also useful in many practical applications.
For example, in the chemical reactions data, in addition to predicting groups of chemical compounds which form reactants and/or products, it is also important to predict the direction between reactants and products, i.e., a group of reactants react to give a group of products.
Directed hypergraphs (BID12) provide a way to model the direction information in hypergraphs.
Similar to the undirected hypergraphs, predicting the missing hyperlinks in a directed hypergraph is also useful in practical settings.
Figure 1 illustrates the difference between modeling the chemical reactions data using undirected and directed hypergraphs.
Most of the previous work on hyperlink prediction (BID43) focuses only on undirected hypergraphs.
In this work we focus on both undirected and directed hypergraphs. Recently, Graph Convolutional Networks (GCNs) (BID21) have emerged as a powerful tool for representation learning on graphs.
GCNs have also been successfully applied for link prediction on normal graphs (BID34; BID20).
Inspired by the success of GCNs for link prediction in graphs and deep learning in general (BID39), we propose a GCN-based framework for hyperlink prediction which works for both undirected and directed hypergraphs.
Figure 1: Illustrating the difference between modeling chemical reactions data using undirected and directed hypergraphs. To the left is the undirected hypergraph, in which both the reactants and products are present in the same hyperlink. In the directed hypergraph (to the right), for a given reaction, the reactants are connected by one hyperlink, the products are connected by another hyperlink, and these two hyperlinks are connected by a directed link.
We make the following contributions: •
We propose a Graph Convolutional Networks (GCN)-based framework called Neural Hyperlink Predictor (NHP) for the problem of hyperlink prediction. To
the best of our knowledge, this is the first ever deep learning based approach for this problem. • We
extend the proposed NHP to the problem of hyperlink prediction in directed hypergraphs. To the best of our knowledge, this is the first ever attempt at the problem of link prediction in directed hypergraphs. • Through
extensive experiments on multiple real-world datasets, we show the effectiveness of the proposed NHP for link prediction in both undirected and directed hypergraphs. We have released NHP's source code at this anonymous location: https://anonymous.4open.science/repository/7d86231e-f6ba-4795-ae51-ac28d89f1521/.
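As background for how a GCN-style operator can be applied to a hypergraph, the sketch below shows one generic incidence-matrix-based hypergraph convolution and a simple way to score a candidate hyperlink by pooling node embeddings. This is an illustrative construction under assumed interfaces, not necessarily NHP's exact operator or scoring function.

```python
import torch

def hypergraph_conv(X, H, W):
    # One generic hypergraph-convolution layer. X: node features (n x d),
    # H: incidence matrix (n x m) with H[v, e] = 1 if node v belongs to hyperedge e,
    # W: learnable weight matrix (d x d_out).
    Dv = H.sum(dim=1).clamp(min=1.0)             # node degrees
    De = H.sum(dim=0).clamp(min=1.0)             # hyperedge sizes
    edge_msg = (H.t() @ X) / De.unsqueeze(1)     # average node features within each hyperedge
    node_msg = (H @ edge_msg) / Dv.unsqueeze(1)  # average hyperedge messages back to the nodes
    return torch.relu(node_msg @ W)

def score_hyperlink(Z, node_ids, scorer):
    # Score a candidate hyperlink by mean-pooling the embeddings of its nodes
    # and passing the result through a small scoring network (e.g., an MLP).
    return torch.sigmoid(scorer(Z[node_ids].mean(dim=0)))
```

A directed variant could score the reactant set and the product set separately and then score the directed link between the two pooled representations, in line with the directed-hypergraph picture in Figure 1.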
As we can see in table 9, the standard deviations of random negative sampling are on the higher side.
This is expected as the particular choice made for negative samples decides the decision boundary for the binary classifier.
The superior AUC values of mixed in Table 8 support our intuition that it provides the benefits of both positive-unlabeled learning and uniform random negative sampling.
The standard deviations of mixed are much lower, but still higher than those of positive-unlabeled learning. In general, summarising the results for all datasets, we believe that positive-unlabeled learning is superior to random negative sampling because of its higher-confidence (lower standard deviation) predictions.
We have introduced NHP, a novel neural approach for hyperlink prediction in both undirected and directed hypergraphs.
To the best of our knowledge, this is the first neural method for hyperlink prediction in undirected hypergraphs.
NHP is also the first method for hyperlink prediction in directed hypergraphs.
Through extensive experiments on multiple real-world datasets, we have demonstrated NHP's effectiveness over state-of-the art baselines.
Approaches that augment GCNs with attention (BID38), self-training and co-training with random walks (BID23), and edge-feature learning in a dual-primal setup (BID28) have recently been proposed for graph-based semi-supervised learning tasks.
Our NHP framework provides the flexibility to incorporate these approaches for more improved performance.
An interesting future direction is predicting hyperlinks in partial-order hypergraphs (Feng et al., 2018).
We leave extending the NHP framework to inductive settings as part of future work.
TAB1: Hyperparameters of the GCN used for all the datasets: number of hidden units = 16, number of hidden layers = 2, dropout rate = 0.5, L2 regularisation = 5 × 10^-4, learning rate = 0.01, non-linearity = ReLU.
• DBLP: We used the DBLP database v4.
We filtered out papers without abstracts, and processed each abstract by tokenizing it and removing stop-words.
Further, we filtered out papers with one author only.
This left 540532 papers.
In order to ensure that the hypergraph formed would be sufficiently dense, we found the number of papers authored by each author and took the top 1000 authors as 'selected authors'.
Then we filtered out the papers that were not authored by at least three of the selected authors.
Finally, we were left with 1590 papers by 685 of the original 1000 selected authors.
To extract word features from each of these abstracts, we took all words appearing in these abstracts with a frequency greater than 50.
Each abstract was thus represented by a 602-dimensional bag-of-words representation. For both datasets, we randomly sample |E| fake papers according to the author distribution of the existing non-fake papers (2708 and 1590 for CORA and DBLP respectively).
We randomly generated Gaussian p dimensional features for these fake papers (1433 and 602 for CORA and DBLP respectively). | We propose Neural Hyperlink Predictor (NHP). NHP adapts graph convolutional networks for link prediction in hypergraphs | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:705 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we present a method for adversarial decomposition of text representation.
This method can be used to decompose a representation of an input sentence into several independent vectors, where each vector is responsible for a specific aspect of the input sentence.
We evaluate the proposed method on two case studies: the conversion between different social registers and diachronic language change.
We show that the proposed method is capable of fine-grained controlled change of these aspects of the input sentence.
For example, our model is capable of learning a continuous (rather than categorical) representation of the style of the sentence, in line with the reality of language use.
The model uses adversarial-motivational training and includes a special motivational loss, which acts opposite to the discriminator and encourages a better decomposition.
Finally, we evaluate the obtained meaning embeddings on a downstream task of paraphrase detection and show that they are significantly better than the embeddings of a regular autoencoder.
Despite the recent successes in using neural models for representation learning for natural language text, learning a meaningful representation of input sentences remains an open research problem.
A variety of approaches, from sequence-to-sequence models that followed the work of BID37 to the more recent proposals (BID2; BID29; BID8; BID25; BID36; BID5), share one common drawback.
Namely, all of them encode the input sentence into just one single vector of a fixed size.
One way to bypass the limitations of a single vector representation is to use an attention mechanism (BID3; BID40).
We propose to approach this problem differently and design a method for adversarial decomposition of the learned input representation into multiple components.
Our method encodes the input sentence into several vectors, where each vector is responsible for a specific aspect of the sentence. In terms of learning different separable components of input representation, our work most closely relates to the style transfer work, which has been applied to a variety of different aspects of language, from diachronic language differences (BID42) to authors' personalities (BID24) and even sentiment (BID17; BID13).
The style transfer work effectively relies on the more classical distinction between meaning and form (BID9), which accounts for the fact that multiple surface realizations are possible for the same meaning.
For simplicity, we will use this terminology throughout the rest of the paper. Consider the case when we encode an input sentence into a meaning vector and a form vector.
We are then able to perform a controllable change of meaning or form by a simple change applied to these vectors.
For example, we can encode two sentences written in two different styles, then swap the form vectors while leaving the meaning vectors intact.
We can then generate new unique sentences with the original meaning, but written in a different style. In the present work, we propose a novel model for this type of decomposition based on adversarial-motivational training, and design an architecture inspired by GANs (BID14) and adversarial autoencoders (BID26).
In addition to the adversarial loss, we use a special motivator (BID0), which, in contrast to the discriminator, provides a motivational loss to encourage the model to better decompose the meaning and the form, as well as specific aspects of meaning.
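As a schematic illustration of the decomposition and adversarial objective described above (dimensions, layer choices, and loss handling are placeholders, not the paper's architecture), a minimal sketch might look as follows: the encoder splits a sentence representation into a meaning vector and a form vector, a discriminator tries to recover the form label from the meaning vector, and the encoder is penalized whenever that succeeds.

```python
import torch
import torch.nn as nn

class Decomposer(nn.Module):
    # Splits a sentence representation into a meaning vector and a form vector.
    def __init__(self, in_dim, meaning_dim, form_dim):
        super().__init__()
        self.meaning_head = nn.Linear(in_dim, meaning_dim)
        self.form_head = nn.Linear(in_dim, form_dim)

    def forward(self, sent_repr):
        return self.meaning_head(sent_repr), self.form_head(sent_repr)

def adversarial_losses(meaning, form_label, discriminator):
    # The discriminator tries to predict the form (style) label from the meaning vector;
    # the encoder is trained with the opposite objective, pushing form information
    # out of the meaning vector. (In practice, d_loss updates only the discriminator,
    # with the meaning vector detached, and enc_loss updates only the encoder.)
    ce = nn.CrossEntropyLoss()
    d_loss = ce(discriminator(meaning), form_label)
    enc_loss = -d_loss
    return d_loss, enc_loss

# Style swap at generation time: decode(meaning_a, form_b) should realize
# sentence a's content in sentence b's register.
```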
We make all the code publicly available on GitHub. We
evaluate the proposed methods for learning separate aspects of input representation on the following case studies: 1. Learning
to separate out a representation of the specific diachronic slice of language. One may
express the same meaning using Early Modern English (e.g. What would she have?) and contemporary English (What does she want?). 2. Learning
a representation for a social register (BID16), that is, subsets of language appropriate in a given context or characteristic of a certain group of speakers.
formal and informal language, the language used in different genres (e.g., fiction vs. newspapers vs. academic texts), different dialects, and even literary idiostyles. We experiment
with the registers corresponding to the titles of scientific papers vs. newspaper articles.
The classifier used in the transfer strength metric achieves very high accuracy (0.832 and 0.99 for the Shakespeare and Headlines datasets, respectively).
These results concur with the results of BID34 and BID13, and show that the two forms in the corpora are significantly different. Following BID13, we show the results of different configurations of the sizes of the form and meaning vectors in FIG2.
Namely, we report combinations of 64- and 256-dimensional vectors. Note that the size of the form vector is important.
The larger the form vector, the higher the transfer strength, but the lower the content preservation.
This is consistent with BID13, where they observed a similar behaviour. It is clear that the proposed method achieves significantly better transfer strength than the previously proposed model.
It also has a lower content preservation score, which means that it repeats fewer exact words from the source sentence.
Note that a low transfer strength and a very high (0.9) content preservation score mean that the model was not able to successfully learn to transfer the form, and the target sentence is almost identical to the source sentence.
The Shakespeare dataset is the hardest for the model in terms of transfer strength, probably because it is the smallest dataset, but the proposed method performs consistently well in the transfer of both form and meaning, in contrast to the baseline. Fluency of generated sentences: Note that there is no guarantee that the generated sentences will be coherent after switching the form vector.
In order to estimate how this switch affects the fluency of generated sentences, we trained a language model on the Shakespeare dataset and calculated the perplexity of the generated sentences using the original form vector and the average of form vectors of k random sentences from the opposite style (see subsubsection 5.1.1).
While the perplexity of such sentences does go up, this change is not big (6.89 vs 9.74). | A method which learns separate representations for the meaning and the form of a sentence | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:706 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We were approached by a group of healthcare providers who are involved in the care of chronic patients looking for potential technologies to facilitate the process of reviewing patient-generated data during clinical visits.
Aiming at understanding the healthcare providers' attitudes towards reviewing patient-generated data, we (1) conducted a focus group with a mixed group of healthcare providers.
Next, to gain the patients' perspectives, we (2) interviewed eight chronic patients, collected a sample of their data and designed a series of visualizations representing patient data we collected.
Last, we (3) sought feedback on the visualization designs from healthcare providers who requested this exploration.
We found four factors shaping patient-generated data: data & context, patient's motivation, patient's time commitment, and patient's support circle.
Informed by the results of our studies, we discussed the importance of designing patient-generated visualizations for individuals by considering both patient and healthcare provider rather than designing with the purpose of generalization and provided guidelines for designing future patient-generated data visualizations.
Collecting patient-generated data is becoming increasingly common in chronic disease management [20] .
Patients use technological tracking tools to collect health and lifestyle data in disparate places [3] .
Both healthcare providers and patients agree that this data could be used to make smarter decisions to improve patients' quality of life and to aid providers in making decisions about patient ongoing care [36, 44, 57] .
There are already technological tools for tracking and visualizing health data such as sleep (e.g., [10] ), physical activity (e.g., [16] ), variations in weight (e.g., [37] ), and blood sugar level (e.g., [6] ).
However, most of these tracking tools are not designed to fully meet patients and healthcare providers' expectations [11] and do not support reviewing patient-generated data with healthcare providers during clinical visits.
One way to support patients in presenting their data with healthcare providers is to visualize the patient-generated data collections effectively.
Yet, we lack an understanding of what type of visualization designs can support chronic patients to present and review their health data with healthcare providers during clinical visits.
To answer this question, we explored patients' and healthcare providers' perspectives on presenting and reviewing patient data.
To extract healthcare provider requirements when reviewing patient-generated data during a clinical visit, we conducted a focus group with a mixed group of healthcare providers.
To uncover patient stories and their approaches to tracking and presenting their health data, we interviewed eight patients with chronic conditions who actively track their health data.
Our findings revealed four factors shaping patient-generated data: data items & data context collected by patients, time commitment invested by patients to track data, patients' motivation for collecting data, and patients' support circle.
Considering these four factors, we designed various visualizations representing patient-generated data collections we gathered from our patients.
Instead of pursuing a single generalized visualization design, we designed individually tailored visualizations for each patient.
Based on our preliminary visualization designs, we proposed a design space of patient-generated data visualizations.
Next, using these individually tailored visualization designs as elicitation artifacts, we interviewed the healthcare providers who had initiated the request for this project to reflect on the designs.
Healthcare providers pointed to four use cases in which they envision these visualizations could support their practice.
As a whole, the results of all our studies led to one message: the importance of designing patient-generated data visualizations by considering each patient and healthcare provider rather than designing for generalization.
However, it may seem impossible to either design a unique set of visualizations for each patient or expect patients to design their own visualizations.
We, as healthcare technology designers, need to provide patients and providers with a set of visualization designs as starting points.
This approach would let each patient and provider choose the visualization designs that work the best for them, with the capacity to customize the designs based on their lifestyle, conditions, collected data, and patient-provider relationships.
Our contributions in this paper are as follows:
(1) We identified four factors shaping patient-generated data.
(2) We presented a design space of visualizations representing patient-generated data collections.
(3) We provided guidelines for designing future patient-generated data visualizations.
Effective communication of patient-generated data during clinical visits can help patients feel understood and help healthcare providers get all the necessary data they need to make proper medical decisions [26, 29].
Our objective was to design visualizations to support patients present patient-generated data for reviewing during clinical visits.
The focus of our studies was on studying patients with chronic conditions and the healthcare providers who visit chronic patients.
The results of our patient interview studies revealed the individualities and the complexities of patient-generated data collections.
Each patient has a unique body, a highly individualized lifestyle, a different set of goals, and a personalized patient-provider relationship.
All these factors need to be considered while caring or designing for patients [19] .
How can we design only one visualization solution that can consider all these differences in patients?
Providers also differed in their principal goal of using patient-generated data.
This has major implications for the design of visualizations.
A solution that works for one provider may not work for another.
This may affect the types of visualizations we consider for them and their patients.
There are many driving forces for designing effective patient-generated data visualizations.
It is still unclear which direction works best for both patients and providers.
In software and technology design, research, and businesses, there is often the notion of designing with a generalization mindset, 'one-size-fits-all'.
The idea of designing one piece of software or one visualization tool that can address everyone's problems may be appealing and cost-efficient, but it does not always prove valid [5].
We echo the call from recent research [5] for the necessity to design for particulars, individuals.
Looking into the medical literature and the approaches taken in healthcare services for patient care planning, we often see one-to-one interactions between a patient and their healthcare providers in clinical visits [23, 34].
This one-to-one interaction model has been practiced for centuries in medicine and is tailored depending on the individualities of each patient and their healthcare provider.
Similarly, for designing visualizations to improve patient-provider communication, we, as visualization and technology designers, should take directions from the medical literature and their practices.
We should take steps towards designing individualized tailored visualizations based on both patient and provider preferences to be able to accommodate as many patient-provider communications as possible [52] .
Perhaps one solution can be to start designing and developing many patient-generated visualizations tailored based on both the healthcare provider and the patient preferences [8] .
There have been attempts in the literature to design visualizations representing patient-generated data for some chronic conditions, including visualizing bipolar patients' lived experience [45], collaborative interactive storyboard design with chronic pediatric patients [24], and photo-based visualization to support patients with irritable bowel syndrome in communicating with providers [15].
However, designing visualizations to support chronic patients with their self-collected data is indeterminate, or in other words a wicked problem [7], meaning there are no definitive solutions or limits to this design problem.
Our design study in this paper is a step towards tackling this wicked problem.
We followed two criteria required for a rigorous design study when addressing a wicked problem [31]: 1) A rigorous solution to a wicked problem needs to include multiple perspectives to shape the design problem and consider a broad set of solutions [31].
Thus, in our study, we included the perspectives of both patients and healthcare providers.
We explored the healthcare provider perspectives on reviewing patient-generated data during clinical visits and the details of eight patients' approaches to tracking and presenting their health data.
Furthermore, we looked into a sample of our patient participants data collections to understand patients' methods of recording their data and their reasoning.
2) A design study is not and should not be reproducible; rather the solutions proposed are one of many possible solutions [31] .
However, a rigorous solution to a wicked problem needs to report the process of design and analysis in a transparent way [31].
We designed multiple alternative visualizations for each patient.
All of our visualizations together shaped a design space of varied patient-generated data representations.
We understand that depending on the patient needs, the providers' expectations, and the patient-provider relationship dynamics, a different set of visualization designs could be suitable.
Our solutions are one of the many possible solutions to this wicked problem.
Furthermore, we transparently explained our design process and our reflections on the designs in detail.
We hope that the detailed design process we provided supports other researchers and designers to further tackle this wicked problem and to design patient-generated data visualizations.
In recent years, we have seen growing interests among patients with chronic conditions to track and analyze their data.
However, sharing this data with healthcare providers can be challenging due to limited time in clinical visits and the large and complex nature of patient-generated data.
We responded to a group of healthcare providers' call from a local hospital to design potential technological solutions to address the challenges of presenting, reviewing, and analyzing patient-generated data collections.
We first gained healthcare providers' perspectives through a focus group.
Then, we took an in-depth look at chronically ill patients' perspectives on tracking their health data.
The individual differences among these patients promoted a design space approach where we used insights from these patients to design a space of possible tailored visualizations.
By exploring the possibilities of designing individual tailored visualizations representing patient-generated data, we have added one way that can support patients and healthcare providers when reviewing patient-generated data during clinical visits.
We hope our proposed visualizations provide patients and healthcare providers with better opportunities to present, review, and gain insights from patient-generated data.
We note that we included the perspectives of a small number of patients and healthcare providers; thus, other perspectives may not be included in our results.
However, we envision this study as a stepping stone for the call to focus more on designing technologies in healthcare for individuals.
We encourage the human-computer interaction, visualization, and healthcare communities to repeat these studies by including more patients and healthcare providers and explore designing tailored visualizations for each individual.
Then, as a community, we can move towards accumulating these perspectives and designs to empower individuals with accessible design variations.
We hope that in the long term, the results of this exploration contribute to supporting patients and healthcare providers in reviewing patient-generated data collections using visualizations during clinical visits.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:707 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recently, neural-network based forward dynamics models have been proposed that attempt to learn the dynamics of physical systems in a deterministic way.
While near-term motion can be predicted accurately, long-term predictions suffer from accumulating input and prediction errors which can lead to plausible but different trajectories that diverge from the ground truth.
A system that predicts distributions of the future physical states for long time horizons based on its uncertainty is thus a promising solution.
In this work, we introduce a novel robust Monte Carlo sampling based graph-convolutional dropout method that allows us to sample multiple plausible trajectories for an initial state given a neural-network based forward dynamics predictor.
By introducing a new shape preservation loss and training our dynamics model recurrently, we stabilize long-term predictions.
We show that our model’s long-term forward dynamics prediction errors on complicated physical interactions of rigid and deformable objects of various shapes are significantly lower than existing strong baselines.
Lastly, we demonstrate how generating multiple trajectories with our Monte Carlo dropout method can be used to train model-free reinforcement learning agents faster and to better solutions on simple manipulation tasks.
Figure 1: Small errors in the input and prediction can lead to significantly different object trajectories. The orange ball could either end up on the left or the right side of the wedge.
Learning to predict the physical motion of objects from data is an open area of research.
Yet, recent (hierarchical) relation network based forward dynamics predictors (Battaglia et al., 2016; Chang et al., 2016; Li et al., 2019) seem to be a promising alternative to conventional physics engines that are key components of robot control, computer vision and reinforcement learning (RL) systems.
Physics simulators, both traditional numerical solvers and learned prediction models, still suffer from insufficient accuracy in challenging scenarios.
Small errors in the input and model can lead to dramatically different object trajectories.
Take the orange ball that is falling on the blue wedge in Figure 1 .
Depending on where the orange ball starts or what bias the model has, the ball could either end up on the left or right side.
Both are valid outcomes.
However, deterministic physics engines will either predict one trajectory or the other.
While it is important to reduce errors in each prediction, it is also important to acknowledge that uncertain situations might not have one but multiple possible outcomes.
In machine learning, uncertainty-aware neural networks avoid deterministic point estimates by predicting distributions or by randomly sampling in the prediction interval.
In the context of dynamics predictions, we propose to use Monte Carlo sampling based dropout on the model weights of a learned forward dynamics predictor to model uncertainty and sample multiple plausible trajectories for an initial state.
To stabilize each trajectory and reduce error accumulation over long-time horizons, we use a state-invariant recurrent training mechanism.
By feeding back predictions as input over multiple time steps, the model becomes more robust to its own prediction errors without the need for a hidden state.
Finally, we introduce a new shape loss on the model predictions that constrains the pairwise distances between objects and object parts and greatly improves shape preservation and the stability of trajectories over long-time horizons.
Our final fully differentiable forward dynamics model is able to sample multiple, more accurate and more stable trajectories over long-time horizons compared to existing baselines.
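To make the sampling mechanism concrete, here is a hypothetical PyTorch-style sketch of drawing multiple trajectories by keeping dropout active at inference time, together with a simple pairwise-distance shape-preservation loss. The function names, rollout interface, and loss form are assumptions for illustration, not the paper's exact implementation.

```python
import torch

def sample_trajectories(dynamics_model, state0, num_samples, horizon):
    # Roll out several plausible trajectories from one initial state by keeping
    # dropout active (MC dropout): each sample uses a different dropout mask.
    dynamics_model.train()   # keeps dropout stochastic; batch-norm layers, if any,
                             # should be switched to eval mode separately
    trajectories = []
    with torch.no_grad():
        for _ in range(num_samples):
            state, rollout = state0, []
            for _ in range(horizon):
                state = dynamics_model(state)   # one-step forward prediction
                rollout.append(state)
            trajectories.append(torch.stack(rollout))
    return torch.stack(trajectories)            # shape: (num_samples, horizon, ...)

def shape_loss(pred_pos, true_pos):
    # Penalize changes in pairwise distances between predicted object parts,
    # encouraging shape preservation over long rollouts. Both inputs: (num_parts, 3).
    return torch.mean((torch.cdist(pred_pos, pred_pos) -
                       torch.cdist(true_pos, true_pos)) ** 2)
```

In a model-free RL setting, such sampled rollouts can stand in for repeated environment interaction, which is the use case discussed next.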
An accurate forward dynamics predictor that is able to predict a distribution of future states can be of great importance for robotic control.
In model-free reinforcement learning, accomplishing tasks through random exploration is sample inefficient and hardly generalizable.
Model-based methods promise greater generalization abilities, but suffer from deterministic world models that are hard to learn and fail in stochastic environments.
With our stochastic forward dynamics predictor, we can move part of the sampling process into the environment, physically grounding the random exploration of model-free agents.
As the agent is able to observe multiple trajectories at a given state without actually executing multiple actions, the sample efficiency is greatly improved while the stochasticity of each state and action is implicitly learned.
We show on several control experiments that a model-free agent trained in our stochastic forward dynamics environment is not only able to better explore and learn faster but often also comes to better solutions than agents trained in deterministic environments.
In summary, (1) we propose a stochastic differentiable forward dynamics model that is able to generate multiple plausible trajectories via Monte Carlo (MC) based graph-convolutional dropout.
(2) We greatly improve the accuracy and stability of long-term predictions by proposing a new fully-connected shape loss term and training the model recurrently end-to-end in a state-invariant way.
(3) We demonstrate how our stochastic dynamics model can be used to improve the efficiency and performance of model-free reinforcement learning agents on several physical manipulation tasks.
Qualitatively, our stochastic HRN predicts plausible future trajectories; an experiment in which human subjects are asked to discriminate between ground-truth and predicted trajectories could be used to evaluate its performance quantitatively.
Even though this method does not require extensive expert knowledge, a few design decisions have to be made, e.g., the dropout rates for training and inference.
During inference, too high of a dropout rate can lead to visually unrealistic dynamics and object interactions.
Dropout rate scheduling should be investigated to improve the convergence of the dynamics model during training, which may improve its performance as an environment for the reinforcement learning tasks.
Possible optimizations include more complex, potentially non-linear, annealing schedules during inference, delaying the dropout rate annealing, and finding appropriate starting values.
Finding a universal schedule that can be applied to any environment and task has large potential for accelerating reinforcement learning.
Further improvements for the physics predictor are key for its use as a physical environment.
These can include improvements for: scenarios with multiple materials in one scene, penetrations during collisions that can lead to inaccurate position predictions, and generalization to new scenes.
Our results show that the proposed sampling method produces physically plausible trajectories in single-and multi-object scenarios as well as across a range of materials.
The quality of roll-outs, e.g. shape prediction is not compromised by the introduced noise.
Furthermore, our model-free reinforcement learning experiments indicate that agents learning in physically stochastic environments are able to explore better and learn quicker, which confirms the quality of the sampled trajectories.
In difficult reinforcement learning scenarios, where a high level of precision is needed to obtain a reward, we demonstrated that dropout rate annealing is an effective method to avoid excessive randomness while not reducing the benefits of stochasticity for exploration in the early stages of training.
In this regard, stochastic neural physics engines offer a clear advantage over conventional physics engines. | We propose a stochastic differentiable forward dynamics predictor that is able to sample multiple physically plausible trajectories under the same initial input state and show that it can be used to train model-free policies more efficiently. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:708 |