paper_id | venue | year | paper_title | paper_authors | paper_abstract | paper_keywords | review_id | review_title | review_rating | review_text | review_confidence
---|---|---|---|---|---|---|---|---|---|---|---|
SJqaCVLxx | ICLR.cc/2017/conference | 2017 | New Learning Approach By Genetic Algorithm In A Convolutional Neural Network For Pattern Recognition | ["Mohammad Ali Mehrolhassani", "Majid Mohammadi"] | Almost all of the presented articles in the CNN are based on the error backpropagation algorithm and calculation of derivations of error, our innovative proposal refers to engaging TICA filters and NSGA-II genetic algorithms to train the LeNet-5 CNN network. Consequently, genetic algorithm updates the weights of LeNet-5 CNN network similar to chromosome update. In our approach the weights of LeNet-5 are obtained in two stages. The first is pre-training and the second is fine-tuning. As a result, our approach impacts in learning task. | ["Deep learning", "Supervised Learning", "Optimization", "Computer vision"] | ry5St4x4x | Presentation hinders work. | 2: Strong rejection | The paper is still extremely poorly written and presented despite multiple reviewers asking to address that issue. The frequent spelling mistakes and incoherent sentences and unclear presentation make reading and understanding the paper very difficult and time consuming. Consider getting help from someone with good english and presentation skills. | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
SJqaCVLxx | ICLR.cc/2017/conference | 2017 | New Learning Approach By Genetic Algorithm In A Convolutional Neural Network For Pattern Recognition | ["Mohammad Ali Mehrolhassani", "Majid Mohammadi"] | Almost all of the presented articles in the CNN are based on the error backpropagation algorithm and calculation of derivations of error, our innovative proposal refers to engaging TICA filters and NSGA-II genetic algorithms to train the LeNet-5 CNN network. Consequently, genetic algorithm updates the weights of LeNet-5 CNN network similar to chromosome update. In our approach the weights of LeNet-5 are obtained in two stages. The first is pre-training and the second is fine-tuning. As a result, our approach impacts in learning task. | ["Deep learning", "Supervised Learning", "Optimization", "Computer vision"] | ryhZnCEEx | hard to understand what is going on | 3: Clear rejection | The authors seem to have proposed a genetic algorithm for learning the features of a convolutional network (LeNet-5 to be precise). The algorithm is validated on some version of the MNIST dataset.
Unfortunately the paper is extremely hard to understand, and it is not at all clear what the exact training algorithm is. Nor do the authors ever motivate why one would use such training as opposed to standard back-prop. What are its advantages/disadvantages? Furthermore, the experimental section is equally unclear. The authors seem to have merged the training and validation sets of the MNIST dataset and use only a subset of it. It is not clear why that is the case or what subset they use. In addition, to the best of my understanding, the results reported are RMSE as opposed to classification error. Why is that the case?
In short, the paper is extremely hard to follow, and it is not at all clear what the training algorithm is or how it is better than the standard way of training. The experimental section is equally confusing and unconvincing.
Other comments:
-- The figures still say LeCun-5
-- The legends of the plots are not in English. Hence I'm not sure what is going on there.
-- The paper is riddled with typos and hard to understand phrasing. | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
SJqaCVLxx | ICLR.cc/2017/conference | 2017 | New Learning Approach By Genetic Algorithm In A Convolutional Neural Network For Pattern Recognition | ["Mohammad Ali Mehrolhassani", "Majid Mohammadi"] | Almost all of the presented articles in the CNN are based on the error backpropagation algorithm and calculation of derivations of error, our innovative proposal refers to engaging TICA filters and NSGA-II genetic algorithms to train the LeNet-5 CNN network. Consequently, genetic algorithm updates the weights of LeNet-5 CNN network similar to chromosome update. In our approach the weights of LeNet-5 are obtained in two stages. The first is pre-training and the second is fine-tuning. As a result, our approach impacts in learning task. | ["Deep learning", "Supervised Learning", "Optimization", "Computer vision"] | SJHONQQEl | still difficult to understand | 3: Clear rejection | Unfortunately, this paper is very difficult to understand. The current version of this paper seems improved compared to the initial version, but it is still far from a finished level. I'd encourage the authors to keep editing the language and presentation.
I also think it would be good to try answering some of the following questions very clearly in the paper:
- What is the advantage, if any, of the proposed algorithm over SGD? What is the motivation and goal of the work beyond MNIST benchmarking?
- Why are so few training examples used? Is this a scenario in which the system might have an advantage?
- Concretely describe the genetic algorithms terminology used in the algorithm descriptions, and what each term means in the context of the convolutional network.
- Try to make sure that the method, as described, can be understood by a reader without much prior background on genetic algorithms.
- A single experiment on MNIST is too small to adequately describe the algorithm performance. Consider using a second or third dataset and/or experimental application.
Much work is still needed on the paper's writing before it can be understood well enough. I hope that some of this might be useful in helping to improve it. I would encourage the authors to try to find outside readers, preferably fluent in English, to work with on a frequent basis before resubmitting to another venue.
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
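To make the request above for concrete GA terminology tangible, here is a toy illustration of the usual vocabulary mapping: a chromosome is a flattened weight vector, a gene is a single weight, mutation is additive noise, and fitness is the negative validation loss. This is a generic sketch, not the paper's NSGA-II/TICA pipeline:
```python
import numpy as np

# Illustrative only: chromosome = flattened weight vector, fitness = -loss.
rng = np.random.default_rng(0)

def fitness(weights, X, y):
    """Negative MSE of a linear model; stands in for validation accuracy."""
    pred = X @ weights
    return -np.mean((pred - y) ** 2)

def mutate(chromosome, sigma=0.1):
    """Mutation: perturb every gene (weight) with Gaussian noise."""
    return chromosome + rng.normal(0.0, sigma, size=chromosome.shape)

def crossover(a, b):
    """Uniform crossover: each gene comes from one of the two parents."""
    mask = rng.random(a.shape) < 0.5
    return np.where(mask, a, b)

# Toy data and a population of 20 chromosomes for a 5-weight model.
X = rng.normal(size=(100, 5))
true_w = rng.normal(size=5)
y = X @ true_w
population = [rng.normal(size=5) for _ in range(20)]

for generation in range(50):
    scored = sorted(population, key=lambda w: fitness(w, X, y), reverse=True)
    parents = scored[:10]                      # selection: keep the fittest half
    children = [mutate(crossover(parents[i], parents[(i + 1) % 10]))
                for i in range(10)]
    population = parents + children            # elitism + offspring

best = max(population, key=lambda w: fitness(w, X, y))
print("best fitness:", fitness(best, X, y))
```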
Bkfwyw5xg | ICLR.cc/2017/conference | 2017 | Investigating Different Context Types and Representations for Learning Word Embeddings | ["Bofang Li", "Tao Liu", "Zhe Zhao", "Buzhou Tang", "Xiaoyong Du"] | The number of word embedding models is growing every year. Most of them learn word embeddings based on the co-occurrence information of words and their context. However, it's still an open question what is the best definition of context. We provide the first systematical investigation of different context types and representations for learning word embeddings. We conduct comprehensive experiments to evaluate their effectiveness under 4 tasks (21 datasets), which give us some insights about context selection. We hope that this paper, along with the published code, can serve as a guideline of choosing context for our community.
| ["Unsupervised Learning", "Natural language processing"] | rJvBo-o7x | Solid work, but inconclusive and of narrow interest | 4: Ok but not good enough - rejection | This paper investigates the issue of whether and how to use syntactic dependencies in unsupervised word representation learning models like CBOW or Skip-Gram, with a focus one the issue of bound (word+dependency type, 'She-nsubj') vs. unbound (word alone, 'She') representations for context at training time. The empirical results are extremely mixed, and no specific novel method consistently outperforms existing methods.
The paper is systematic and I have no major concerns about its soundness. However, I don't think that this paper is of broad interest to the ICLR community. The paper is focused on a fairly narrow detail of representation learning that is entirely specific to NLP, and its results are primarily negative. A short paper at an ACL conference would be a more reasonable target. | 3: The reviewer is fairly confident that the evaluation is correct |
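For readers outside NLP, the bound/unbound distinction the review refers to can be made concrete with a toy example; the parse below is hand-written in place of a real dependency parser, and the 'She/nsubj'-style tags mirror the 'She-nsubj' notation above:
```python
# Illustrative sketch of the context definitions discussed above.
sentence = ["She", "eats", "apples"]
# (head_index, dependent_index, relation) triples for "She eats apples".
deps = [(1, 0, "nsubj"), (1, 2, "dobj")]

# Linear window context (window = 1): neighbouring words.
window_contexts = {
    w: [sentence[j] for j in (i - 1, i + 1) if 0 <= j < len(sentence)]
    for i, w in enumerate(sentence)
}

# Unbound dependency context: the word at the other end of each arc.
unbound = {w: [] for w in sentence}
# Bound dependency context: the same word tagged with the relation.
bound = {w: [] for w in sentence}
for h, d, rel in deps:
    unbound[sentence[h]].append(sentence[d])
    unbound[sentence[d]].append(sentence[h])
    bound[sentence[h]].append(f"{sentence[d]}/{rel}")
    bound[sentence[d]].append(f"{sentence[h]}/{rel}-inv")

print(window_contexts)  # {'She': ['eats'], 'eats': ['She', 'apples'], ...}
print(unbound)          # dependency contexts without relation labels
print(bound)            # contexts like 'She/nsubj', matching 'She-nsubj' above
```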
Bkfwyw5xg | ICLR.cc/2017/conference | 2017 | Investigating Different Context Types and Representations for Learning Word Embeddings | ["Bofang Li", "Tao Liu", "Zhe Zhao", "Buzhou Tang", "Xiaoyong Du"] | The number of word embedding models is growing every year. Most of them learn word embeddings based on the co-occurrence information of words and their context. However, it's still an open question what is the best definition of context. We provide the first systematical investigation of different context types and representations for learning word embeddings. We conduct comprehensive experiments to evaluate their effectiveness under 4 tasks (21 datasets), which give us some insights about context selection. We hope that this paper, along with the published code, can serve as a guideline of choosing context for our community.
| ["Unsupervised Learning", "Natural language processing"] | rJ6EgIyNe | 6: Marginally above acceptance threshold | This paper evaluates how different context types affect the quality of word embeddings on a plethora of benchmarks.
I am ambivalent about this paper. On one hand, it continues an important line of work in decoupling various parameters from the embedding algorithms (this time focusing on context); on the other hand, I am not sure I understand what the conclusion from these experiments is. There does not appear to be a significant and consistent advantage to any one context type. Why is this? Are the benchmarks sensitive enough to detect these differences, if they exist?
While I am OK with this paper being accepted, I would rather see a more elaborate version of it, which tries to answer these more fundamental questions.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
|
Bkfwyw5xg | ICLR.cc/2017/conference | 2017 | Investigating Different Context Types and Representations for Learning Word Embeddings | ["Bofang Li", "Tao Liu", "Zhe Zhao", "Buzhou Tang", "Xiaoyong Du"] | The number of word embedding models is growing every year. Most of them learn word embeddings based on the co-occurrence information of words and their context. However, it's still an open question what is the best definition of context. We provide the first systematical investigation of different context types and representations for learning word embeddings. We conduct comprehensive experiments to evaluate their effectiveness under 4 tasks (21 datasets), which give us some insights about context selection. We hope that this paper, along with the published code, can serve as a guideline of choosing context for our community.
| ["Unsupervised Learning", "Natural language processing"] | Sk-BGtoEg | Belowline | 4: Ok but not good enough - rejection | This paper analyzes dependency trees vs standard window contexts for word vector learning.
While that's a good goal, I believe the paper falls short of a thorough analysis of the subject matter.
It does not analyze GloVe-like objective functions, which often work better than the algorithms used here.
It doesn't compare in absolute terms to other published vectors or models.
It fails to gain any particularly interesting insights that will modify other people's work.
It fails to push the state of the art or make available new resources for people.
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
SkgewU5ll | ICLR.cc/2017/conference | 2017 | GRAM: Graph-based Attention Model for Healthcare Representation Learning | ["Edward Choi", "Mohammad Taha Bahadori", "Le Song", "Walter F. Stewart", "Jimeng Sun"] | Deep learning methods exhibit promising performance for predictive modeling in healthcare, but two important challenges remain:
- Data insufficiency: Often in healthcare predictive modeling, the sample size is insufficient for deep learning methods to achieve satisfactory results.
- Interpretation: The representations learned by deep learning models should align with medical knowledge.
To address these challenges, we propose a GRaph-based Attention Model, GRAM that supplements electronic health records (EHR) with hierarchical information inherent to medical ontologies.
Based on the data volume and the ontology structure, GRAM represents a medical concept as a combination of its ancestors in the ontology via an attention mechanism.
We compared predictive performance (i.e. accuracy, data needs, interpretability) of GRAM to various methods including the recurrent neural network (RNN) in two sequential diagnoses prediction tasks and one heart failure prediction task.
Compared to the basic RNN, GRAM achieved 10% higher accuracy for predicting diseases rarely observed in the training data and 3% improved area under the ROC curve for predicting heart failure using an order of magnitude less training data. Additionally, unlike other methods, the medical concept representations learned by GRAM are well aligned with the medical ontology. Finally, GRAM exhibits intuitive attention behaviors by adaptively generalizing to higher level concepts when facing data insufficiency at the lower level concepts. | ["Deep learning", "Applications"] | BJhmMPbNl | 6: Marginally above acceptance threshold | SUMMARY.
This paper presents a method for enriching medical concepts with their parent nodes in an ontology.
The method employs an attention mechanism over the parent nodes of a medical concept to create a richer representation of the concept itself.
The rationale for this is that for infrequent medical concepts the attention mechanism will rely more on general concepts higher in the ontology hierarchy, while for frequent ones it will focus on the specific concept.
The attention mechanism is trained together with a recurrent neural network and the model accuracy is tested on two tasks.
The first task aims at predicting the diagnosis categories at each time step, while the second task aims at predicting whether or not a heart failure is likely to happen after the T-th step.
Results show that the proposed model works well in conditions of data insufficiency.
----------
OVERALL JUDGMENT
The proposed model is simple but interesting.
The ideas presented are worth expanding on, but there are also some points where the authors could have done better.
The learning of the representation of concepts in the ontology is a bit naive; for example, the authors could have used some kind of knowledge-base factorization approach to learn the concepts, or some graph convolutional approach.
I do not see why the very general factorization methods for knowledge bases do not apply in the case of ontology learning.
I also found it strange that the representations of the leaves are fine-tuned while those of the inner nodes are not; is there a specific reason to do so?
Regarding the presentation, the paper is clear and the qualitative evaluation is insightful.
----------
DETAILED COMMENTS
Figure 2. Please use the same image format with the same resolution.
| 3: The reviewer is fairly confident that the evaluation is correct |
|
SkgewU5ll | ICLR.cc/2017/conference | 2017 | GRAM: Graph-based Attention Model for Healthcare Representation Learning | ["Edward Choi", "Mohammad Taha Bahadori", "Le Song", "Walter F. Stewart", "Jimeng Sun"] | Deep learning methods exhibit promising performance for predictive modeling in healthcare, but two important challenges remain:
- Data insufficiency: Often in healthcare predictive modeling, the sample size is insufficient for deep learning methods to achieve satisfactory results.
- Interpretation: The representations learned by deep learning models should align with medical knowledge.
To address these challenges, we propose a GRaph-based Attention Model, GRAM that supplements electronic health records (EHR) with hierarchical information inherent to medical ontologies.
Based on the data volume and the ontology structure, GRAM represents a medical concept as a combination of its ancestors in the ontology via an attention mechanism.
We compared predictive performance (i.e. accuracy, data needs, interpretability) of GRAM to various methods including the recurrent neural network (RNN) in two sequential diagnoses prediction tasks and one heart failure prediction task.
Compared to the basic RNN, GRAM achieved 10% higher accuracy for predicting diseases rarely observed in the training data and 3% improved area under the ROC curve for predicting heart failure using an order of magnitude less training data. Additionally, unlike other methods, the medical concept representations learned by GRAM are well aligned with the medical ontology. Finally, GRAM exhibits intuitive attention behaviors by adaptively generalizing to higher level concepts when facing data insufficiency at the lower level concepts. | ["Deep learning", "Applications"] | ByxbiBc4g | 6: Marginally above acceptance threshold | This paper addresses the problem of data sparsity in the healthcare domain by leveraging hierarchies of medical concepts organized in ontologies. The paper focuses on sequential prediction given a patient’s medical record (a sequence of medical codes, some of which might occur very rarely). Instead of simply assigning each medical code an independent embedding before feeding it to an RNN, the proposed approach assigns each node in the medical ontology a “basic” embedding, and composes a “final” embedding for each medical code by taking a learned weighted average (via an attention mechanism) of the medical code’s ancestors in the ontology. Notably, the paper is well written and the approach is quite intuitive.
I have the following comments:
- Why is the patient’s visit taken as just the sum of medical codes found in the visit, and not say the average or a learned weighted average? Wouldn’t this bias for/against the number of codes in the visit?
- I don’t see why the basic embeddings are not fine-tuned as well. Did you find that to hurt performance? Do you have an explanation for that?
- Looking at Figure 2, the results seem very close and the figures are not very clear (figure (b) top is missing). Also, I am wondering how significant the differences are so it would be nice to comment on that.
Finally, I think this is an interesting application paper applying well-established deep learning techniques. The paper deals with an important issue that arises when applying deep learning models in domains with scarce data resources. However, I would like the authors to comment on what their paper offers as new insights to the ICLR community and why they think ICLR is a good venue for their work.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
|
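To ground the questions in these reviews, here is a minimal sketch of the attention composition they describe: a code's final embedding is a convex combination of the basic embeddings of the code and its ontology ancestors, with weights produced by a small MLP. All shapes, names, and the random initialization are illustrative assumptions, not the paper's implementation:
```python
import numpy as np

rng = np.random.default_rng(0)
m = 8  # embedding dimension (the quantity the third review asks about)

# Basic embeddings for a leaf code and its ancestors in the ontology.
# In GRAM these would be learned; here they are random for illustration.
nodes = ["code", "parent", "grandparent", "root"]
basic = {n: rng.normal(size=m) for n in nodes}

# Attention scores from a small MLP over (leaf, ancestor) embedding pairs.
W = rng.normal(size=(m, 2 * m)) * 0.1
u = rng.normal(size=m) * 0.1

def score(leaf, ancestor):
    return u @ np.tanh(W @ np.concatenate([leaf, ancestor]))

scores = np.array([score(basic["code"], basic[n]) for n in nodes])
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                     # softmax -> convex combination

# Final embedding: attention-weighted sum over the code and its ancestors.
final = sum(a * basic[n] for a, n in zip(alpha, nodes))
print("attention weights:", dict(zip(nodes, alpha.round(3))))
```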
SkgewU5ll | ICLR.cc/2017/conference | 2017 | GRAM: Graph-based Attention Model for Healthcare Representation Learning | ["Edward Choi", "Mohammad Taha Bahadori", "Le Song", "Walter F. Stewart", "Jimeng Sun"] | Deep learning methods exhibit promising performance for predictive modeling in healthcare, but two important challenges remain:
- Data insufficiency: Often in healthcare predictive modeling, the sample size is insufficient for deep learning methods to achieve satisfactory results.
- Interpretation: The representations learned by deep learning models should align with medical knowledge.
To address these challenges, we propose a GRaph-based Attention Model, GRAM that supplements electronic health records (EHR) with hierarchical information inherent to medical ontologies.
Based on the data volume and the ontology structure, GRAM represents a medical concept as a combination of its ancestors in the ontology via an attention mechanism.
We compared predictive performance (i.e. accuracy, data needs, interpretability) of GRAM to various methods including the recurrent neural network (RNN) in two sequential diagnoses prediction tasks and one heart failure prediction task.
Compared to the basic RNN, GRAM achieved 10% higher accuracy for predicting diseases rarely observed in the training data and 3% improved area under the ROC curve for predicting heart failure using an order of magnitude less training data. Additionally, unlike other methods, the medical concept representations learned by GRAM are well aligned with the medical ontology. Finally, GRAM exhibits intuitive attention behaviors by adaptively generalizing to higher level concepts when facing data insufficiency at the lower level concepts. | ["Deep learning", "Applications"] | HywiEvWEx | Interesting approach for learning input representations in RNN | 6: Marginally above acceptance threshold | I read the authors' response and maintain my rating.
---
This paper introduces an approach for integrating a directed acyclic graph structure of the data into word / code embeddings, in order to leverage domain knowledge and thus help train an RNN with scarce data. It is applied to codes of medical visits. Each code is part of an ontology, which can be represented by a DAG, where codes correspond to leaf nodes, and where different codes may share common ancestors (non-leaf nodes) in the DAG. Instead of embedding merely the leaf nodes, one can also embed the non-leaf nodes, and the embeddings of the code and its ancestors can be combined using a convex sum. That convex sum can be seen as an attention mechanism over the representation. The attention weights depend on the embeddings and the weights of an MLP, meaning that the model can separate learning the code embeddings and the interaction between the codes. Code embeddings are pretrained using GloVe, then fine-tuned.
The model is properly evaluated on two medical datasets, with several variations to isolate the contribution of the DAG (GRAM or GRAM+ vs. RNN or RandomDAG) and of pretraining the embeddings (RNN+ vs RNN, GRAM+ vs GRAM). Both are shown to help achieve the best performance and the evaluation methodology seems thorough.
The paper is also well written, and the case for MLP attention instead of a plain dot product of embeddings was made by the authors.
My only two comments would be:
1) Why is there a softmax in equation 4, given that the loss is a multivariate cross-entropy (in the predicted visit, several codes could be equal to 1), not a single-class cross-entropy?
2) What is the embedding dimension m? | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rk5upnsxe | ICLR.cc/2017/conference | 2017 | Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes | ["Mengye Ren", "Renjie Liao", "Raquel Urtasun", "Fabian H. Sinz", "Richard S. Zemel"] | Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. Our second contribution is the finding that a small modification to these normalization schemes, in conjunction with a sparse regularizer on the activations, leads to significant benefits over standard normalization techniques. We demonstrate the effectiveness of our unified divisive normalization framework in the context of convolutional neural nets and recurrent neural networks, showing improvements over baselines in image classification, language modeling as well as super-resolution. | ["activations", "normalizers", "comparing", "normalization techniques", "batch normalization", "recurrent neural networks", "layer", "network normalization schemes", "network normalization", "supervised learning tasks"] | rkLxFcgNg | Overall, I feel it is good to refresh the community about local normalization schemes and other mechanism to favor unit competitions. The paper reads well and reports results on various setups, with sufficient discussion. | 9: Top 15% of accepted papers, strong accept | *** Paper Summary ***
This paper proposes a unified view on normalization. The framework encompasses layer normalization, batch normalization and local contrast normalization. It also suggests decorrelating the inputs through L1 regularization of the activations. Results are reported on three tasks: CIFAR classification, PTB language modeling, and super-resolution on the Berkeley dataset.
*** Review Summary ***
Overall, I feel it is good to refresh the community about local normalization schemes and other mechanism to favor unit competitions. The paper reads well and reports results on various setups, with sufficient discussion.
*** Detailed Review ***
The paper is clear and reads well. It lacks a few references to prior research. Also, I am surprised that "Local Contrast Normalization" is not mentioned anywhere, as it is common terminology in the neural network and vision literature.
It is unclear to me why you chose to pair L1 regularization of the activations with normalization. They seem complementary. Would it make sense to apply L1 regularization to the baseline to highlight that it is helpful on its own? Overall, it seems to be the only thing that brings a consistent improvement across all setups.
On related work, maybe it would be worthwhile to emphasize that Local Contrast Normalization (LCN) used to be very popular [Pinto et al., 2008; Jarrett et al., 2009; Sermanet et al., 2012; Quoc Le, 2013] and effective. It is great to connect this literature to current work on layer normalization and batch normalization. Similarly, sparsity or group sparsity of the activations has proven effective in the past [Rozell et al., 2008; Kavukcuoglu et al., 2009] and needs more exposure today.
Finally, since dropout is so popular but interacts poorly with normalizer estimates, I feel it would be worthwhile to report results with dropout beyond the baseline and discuss how the different normalization schemes interact with it.
*** References ***
Jarrett, K., Kavukcuoglu, K., and LeCun, Y. What is the best multi-stage architecture for object recognition? In 2009 IEEE 12th International Conference on Computer Vision. IEEE, 2009.
Pinto, N., Cox, D., and DiCarlo, J. Why is real-world visual object recognition hard? PLoS Computational Biology, 4, 2008.
Le, Q. V. Building high-level features using large scale unsupervised learning. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2013.
Sermanet, P., Chintala, S., and LeCun, Y. Convolutional neural networks applied to house numbers digit classification. In ICPR, 2012.
Rozell, C., Johnson, D., and Olshausen, B. Sparse coding via thresholding and local competition in neural circuits. Neural Computation, 2008.
Kavukcuoglu, K., Ranzato, M., Fergus, R., and LeCun, Y. Learning invariant features through topographic filter maps. In CVPR, 2009.
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
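Since the L1 penalty on activations is singled out above as the one ingredient that helps consistently, a minimal sketch of what it adds to a training objective may be useful; the layer, targets, and coefficient below are illustrative assumptions:
```python
import numpy as np

# Minimal illustration of an L1 penalty on activations (not on weights):
# it is added to the task loss and pressures hidden units toward sparsity.
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 100))            # a mini-batch of inputs
W = rng.normal(size=(100, 50)) * 0.05
h = np.maximum(0.0, x @ W)                # hidden activations (ReLU)

targets = rng.normal(size=(32, 50))
task_loss = np.mean((h - targets) ** 2)   # stand-in for the real task loss
lam = 1e-3                                # regularization strength (assumed)
l1_activity = lam * np.mean(np.abs(h))    # the sparsity term in question

loss = task_loss + l1_activity
print(task_loss, l1_activity)
```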
rk5upnsxe | ICLR.cc/2017/conference | 2017 | Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes | ["Mengye Ren", "Renjie Liao", "Raquel Urtasun", "Fabian H. Sinz", "Richard S. Zemel"] | Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. Our second contribution is the finding that a small modification to these normalization schemes, in conjunction with a sparse regularizer on the activations, leads to significant benefits over standard normalization techniques. We demonstrate the effectiveness of our unified divisive normalization framework in the context of convolutional neural nets and recurrent neural networks, showing improvements over baselines in image classification, language modeling as well as super-resolution. | ["activations", "normalizers", "comparing", "normalization techniques", "batch normalization", "recurrent neural networks", "layer", "network normalization schemes", "network normalization", "supervised learning tasks"] | HyhJVhb4l | Well written but with little novelty | 5: Marginally below acceptance threshold | This paper empirically studies multiple combinations of various tricks to improve the performance of deep neural networks on various tasks. Authors investigate various combinations of normalization techniques together with additional regularizations.
The paper makes a few interesting empirical observations, such as that the L1 regularizer on top of the activations is relatively useful for most of the tasks.
In general, it seems that this work could be significantly improved by providing a more precise study of existing normalization techniques. Also, studying more closely the overall volumes of the summation and suppression fields (e.g. how many samples one needs to collect for a robust enough normalization) would be useful.
In more detail, the work seems to have the following issues:
* Divisive normalization is used extensively in Krizhevsky12 (LRN). It is almost exactly the same definition as in equation 1, albeit with slightly different constants. Therefore claiming that it is less explored is questionable.
* It is not clear whether the divisive normalization subtracts the mean from the activations, as there is a contradiction between its definitions in equations 1 and 3. This calls into question whether the "General Formulation of Normalization" is correct.
* It seems that divisive normalization is also used in Jarrett09, called Contrast Normalization, with a definition more similar to equation 3 (subtracting the mean).
* In the case of the RNN experiments, it would be clearer to provide the absolute sizes of the summation and suppression fields, as BN may be inferior to DN due to a small batch size.
* It is unclear what is measured, and how, in the results shown in Table 10. It is also unclear what the sizes of the suppression/summation fields are for the CIFAR and Super Resolution experiments.
Minor, relatively irrelevant issues:
* It is usually better to pick a stronger baseline for the tasks. The selected CIFAR model from Caffe seems to be quite far from the state of the art on the CIFAR dataset. A stronger baseline (e.g. the widely available ResNet) would allow to see whether the proposed techniques are useful for the more recent models as well.
* Double caption for Table 7/8. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
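For concreteness, here is one reading of the divisive normalization being debated, with the mean subtraction of equation 3 made explicit; the whole-layer neighbourhood (the layer-normalization special case) and the constants are illustrative assumptions:
```python
import numpy as np

def divisive_normalize(z, sigma=1.0, eps=1e-5):
    """Divisive normalization over a neighbourhood of activations.

    z: (batch, units) activations. The neighbourhood here is the whole
    layer, i.e. the layer-normalization special case of the framework.
    Mean subtraction is made explicit, per the equation-3 reading.
    """
    mu = z.mean(axis=1, keepdims=True)           # summation-field statistic
    centered = z - mu
    denom = np.sqrt(sigma**2 + (centered**2).mean(axis=1, keepdims=True))
    return centered / np.maximum(denom, eps)     # suppression-field division

rng = np.random.default_rng(0)
z = rng.normal(loc=2.0, scale=3.0, size=(4, 16))
print(divisive_normalize(z).std(axis=1))         # roughly unit scale per row
```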
rk5upnsxe | ICLR.cc/2017/conference | 2017 | Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes | ["Mengye Ren", "Renjie Liao", "Raquel Urtasun", "Fabian H. Sinz", "Richard S. Zemel"] | Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. Our second contribution is the finding that a small modification to these normalization schemes, in conjunction with a sparse regularizer on the activations, leads to significant benefits over standard normalization techniques. We demonstrate the effectiveness of our unified divisive normalization framework in the context of convolutional neural nets and recurrent neural networks, showing improvements over baselines in image classification, language modeling as well as super-resolution. | ["activations", "normalizers", "comparing", "normalization techniques", "batch normalization", "recurrent neural networks", "layer", "network normalization schemes", "network normalization", "supervised learning tasks"] | rJ3Df4vVe | Review of "NORMALIZING THE NORMALIZERS: COMPARING AND EXTENDING NETWORK NORMALIZATION SCHEMES" | 7: Good paper, accept | The authors present a unified framework for various divisive normalization schemes, and then show that a somewhat novel version of normalization does somewhat better on several tasks than some mid-strength baselines.
Pros:
* It has seemed for a while that there are a bunch of different normalization methods out there, of varying importance in varying applications, so having a standardized framework for them all, and evaluating them carefully and systematically, is a very useful contribution.
* The paper is clearly written.
* From an architectural standpoint, the actual comparisons seem well motivated. (For instance, I'm glad they tried DN* and BN* -- if they hadn't tried those, I would have wanted them too.)
Cons:
* I'm not really sure what the difference is between their new DN method and standard cross-channel local contrast normalization. (Oh, actually -- looking at the other reviews, everyone else seems to have noticed this too. I'll not beat a dead horse about this any further.)
* I'm nervous that the conclusions they state might not hold on larger, stronger tasks, like ImageNet, and with larger, deeper models. I myself have found that while contrast normalization was really useful with smaller models on simpler tasks (e.g. Caltech 101), it became much less useful for larger architectures on larger tasks. In fact, if I recall correctly, the original AlexNet model had a type of cross-unit normalization in it, but this was dispensed with in more recent models (I think after Zeiler and Fergus 2013) largely because it didn't contribute that much to performance but was somewhat expensive computationally. Of course, batch normalization methods have definitely been shown to contribute to performance on large problems with large models, but I think it would be really important to show the same with the DN methods here before any definite conclusion could be reached.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BJVEEF9lx | ICLR.cc/2017/conference | 2017 | Learning Approximate Distribution-Sensitive Data Structures | ["Zenna Tavares", "Armando Solar-Lezama"] | We present a computational model of mental representations as data-structures which are distribution sensitive, i.e., which exploit non-uniformity in their usage patterns to reduce time or space complexity.
Abstract data types equipped with axiomatic specifications specify classes of concrete data structures with equivalent logical behavior.
We extend this formalism to distribution-sensitive data structures with the concept of a probabilistic axiomatic specification, which is implemented by a concrete data structure only with some probability.
We employ a number of approximations to synthesize several distribution-sensitive data structures from probabilistic specification as deep neural networks, such as a stack, queue, natural number, set, and binary tree. | ["Unsupervised Learning"] | ryztRFW4e | 4: Ok but not good enough - rejection | The paper presents a framework to formulate data structures in a learnable way. It is an interesting and novel approach that could generalize well to interesting data structures and algorithms. In its current state (Revision of Dec. 9th), there are two strong weaknesses remaining: analysis of related work, and experimental evidence.
Reviewer 2 detailed some of the related work already, and DeepMind in particular (with which I am not affiliated) presented some interesting and highly related results with its Neural Turing Machine and follow-up work. While it may of course be very hard to make direct comparisons in the experimental section due to the complexity of re-implementation, it would at least be very important to mention and compare to these works conceptually.
The experimental section shows mostly qualitative results that do not (fully) conclusively treat the topic. Some suggestions for improvements:
* It would be highly interesting to learn about the accuracy of the stack and queue structures, for increasing numbers of elements to store.
* Can a queue / stack be used in arbitrary situations of push-pop operations occurring, even though it was trained solely with consecutive pushes / consecutive pops? Does it, in this enhanced setting, `diverge' at some point?
* The encoded elements from MNIST, even though in a 28x28 (binary?) space, are elements of a ten-element set, and can hence be encoded a lot more efficiently just by `parsing' them, which CNNs can do quite well. Is the NN `just' learning to do that? If so, its performance can be expected to strongly degrade when having to learn to stack more than 28*28/4=196 numbers (in case of an optimal parser and loss-less encoding). To argue more in this direction, experiments would be needed with an increasing number of stack / queue elements. Experimenting with an MNIST parsing NN in front of the actual stack/queue network could help strengthening or falsifying the claim.
* The claims about `mental representations' have very little support throughout the paper. If evidence of correspondence to mental models, etc., could be found, the claim could stand. Otherwise, I would remove it from the paper and focus on the NN aspects, and maybe mention mental models as motivation.
| 3: The reviewer is fairly confident that the evaluation is correct |
|
BJVEEF9lx | ICLR.cc/2017/conference | 2017 | Learning Approximate Distribution-Sensitive Data Structures | ["Zenna Tavares", "Armando Solar-Lezama"] | We present a computational model of mental representations as data-structures which are distribution sensitive, i.e., which exploit non-uniformity in their usage patterns to reduce time or space complexity.
Abstract data types equipped with axiomatic specifications specify classes of concrete data structures with equivalent logical behavior.
We extend this formalism to distribution-sensitive data structures with the concept of a probabilistic axiomatic specification, which is implemented by a concrete data structure only with some probability.
We employ a number of approximations to synthesize several distribution-sensitive data structures from probabilistic specification as deep neural networks, such as a stack, queue, natural number, set, and binary tree. | ["Unsupervised Learning"] | Sy20Q1MNl | Interesting direction, but not there yet. | 4: Ok but not good enough - rejection | A method for training neural networks to mimic abstract data structures is presented. The idea of training a network to satisfy an abstract interface is very interesting and promising, but empirical support is currently too weak. The paper would be significantly strengthened if the method could be shown to be useful in a realistic application, or be shown to work better than standard RNN approaches on algorithmic learning tasks.
The claims about mental representations are not well supported. I would remove the references to mind and brain, as well as the more philosophical points, or write a paper that really emphasizes one of these aspects and supports the claims. | 3: The reviewer is fairly confident that the evaluation is correct |
BJVEEF9lx | ICLR.cc/2017/conference | 2017 | Learning Approximate Distribution-Sensitive Data Structures | ["Zenna Tavares", "Armando Solar-Lezama"] | We present a computational model of mental representations as data-structures which are distribution sensitive, i.e., which exploit non-uniformity in their usage patterns to reduce time or space complexity.
Abstract data types equipped with axiomatic specifications specify classes of concrete data structures with equivalent logical behavior.
We extend this formalism to distribution-sensitive data structures with the concept of a probabilistic axiomatic specification, which is implemented by a concrete data structure only with some probability.
We employ a number of approximations to synthesize several distribution-sensitive data structures from probabilistic specification as deep neural networks, such as a stack, queue, natural number, set, and binary tree. | ["Unsupervised Learning"] | ryg9PB-Vg | Review | 3: Clear rejection | The paper presents a way to "learn" approximate data structures. They train neural networks (ConvNets here) to perform as an approximate abstract data structure by having an L2 loss (for the unrolled NN) on respecting the axioms of the data structure they want the NN to learn. E.g., you NN.push(8), NN.push(6), NN.push(4), and the loss is proportional to the distance between what is NN.pop()ed three times and 4, 6, 8 (this example is the one in Figure 1).
There are several flaws:
- In the case of the stack: I do not see a difference between this and a seq-to-seq RNN trained with e.g. 8, 6, 4 as input sequence, to predict 4, 6, 8.
- While some of the previous work is adequately cited, there is an important body of previous work (some from the 90s) on learning Peano's axioms, stacks, queues, etc. that is neither cited nor compared to. For instance [Das et al. 1992], [Wiles & Elman 1995], and more recently [Graves et al. 2014], [Joulin & Mikolov 2015], [Kaiser & Sutskever 2016]...
- Using MNIST digits, and not e.g. a categorical distribution on numbers, is adding complexity for no reason.
- (Probably the biggest flaw) The experimental section is too weak to support the claims. The figures are adequate, but there is no comparison to anything. There is also no description of, nor attempt to quantify, a form of "success rate" of learning such data structures, for instance w.r.t. the number of examples or the size of the input sequences. The current version of the paper (December 9th 2016) provides, at best, anecdotal experimental evidence to support the claims of the rest of the paper.
While an interesting direction of research, I think that this paper is not experimentally sound enough for ICLR. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
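The axiom-as-loss idea these reviews describe can be written down compactly. The sketch below uses small linear maps as stand-ins for the paper's ConvNets and is illustrative only, not the authors' implementation:
```python
import numpy as np

rng = np.random.default_rng(0)
d = 16   # dimension of the stack state and of the items being stored

# Stand-ins for the learned push/pop networks (the paper uses ConvNets).
W_push = rng.normal(size=(d, 2 * d)) * 0.1   # (state, item) -> new state
W_pop_s = rng.normal(size=(d, d)) * 0.1      # state -> new state
W_pop_x = rng.normal(size=(d, d)) * 0.1      # state -> recovered item

def push(s, x):
    return np.tanh(W_push @ np.concatenate([s, x]))

def pop(s):
    return np.tanh(W_pop_s @ s), W_pop_x @ s

# Axiomatic training signal: popping after pushes should return the items
# in LIFO order, with the L2 distance to each item as the loss.
s = np.zeros(d)
items = [rng.normal(size=d) for _ in range(3)]
for x in items:
    s = push(s, x)
loss = 0.0
for x in reversed(items):            # LIFO order per the stack axioms
    s, x_hat = pop(s)
    loss += np.sum((x_hat - x) ** 2)
print("axiom violation (L2):", loss)  # gradients of this would train the nets
```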
BysvGP5ee | ICLR.cc/2017/conference | 2017 | Variational Lossy Autoencoder | ["Xi Chen", "Diederik P. Kingma", "Tim Salimans", "Yan Duan", "Prafulla Dhariwal", "John Schulman", "Ilya Sutskever", "Pieter Abbeel"] | Representation learning seeks to expose certain aspects of observed data in a learned representation that's amenable to downstream tasks like classification.
For instance, a good representation for 2D images might be one that describes only global structure and discards information about detailed texture.
In this paper, we present a simple but principled method to learn such global representations by combining Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE and PixelRNN/CNN.
Our proposed VAE model allows us to have control over what the global latent code can learn and, by designing the architecture accordingly, we can force the global latent code to discard irrelevant information such as texture in 2D images, and hence the code only ``autoencodes'' data in a lossy fashion.
In addition, by leveraging autoregressive models as both prior distribution $p(z)$ and decoding distribution $p(x|z)$, we can greatly improve generative modeling performance of VAEs, achieving new state-of-the-art results on MNIST, OMNIGLOT and Caltech-101 as well as competitive results on CIFAR10.
| ["Deep learning", "Unsupervised Learning"] | SJ6Ye6ZNg | Interesting ideas, weak evaluation. | 7: Good paper, accept | This paper introduces the notion of a "variational lossy autoencoder", where a powerful autoregressive conditional distribution on the inputs x given the latent code z is crippled in a way that forces it to use z in a meaningful way. Its three main contributions are:
(1) It gives an interesting information-theoretical insight as to why VAE-type models don't tend to take advantage of their latent representation when the conditional distribution on x given z is powerful enough.
(2) It shows that this insight can be used to efficiently train VAEs with powerful autoregressive conditional distributions such that they make use of the latent code.
(3) It presents a powerful way to parametrize the prior in the form of an autoregressive flow transformation which is equivalent to using an inverse autoregressive flow transformation on the approximate posterior.
By itself, I think the information-theoretical explanation of why VAEs do not use their latent code when the conditional distribution on x given z is powerful enough constitutes an excellent addition to our understanding of VAE-related approaches.
However, the way this intuition is empirically evaluated is a bit weak. The "crippling" method used feels hand-crafted and very task-dependent, and the qualitative evaluation of the "lossyness" of the learned representation is carried out on three datasets (MNIST, OMNIGLOT and Caltech-101 Silhouettes) which feature black-and-white images with little-to-no texture. Figures 1a and 2a do show that reconstructions discard low-level information, as observed in the slight variations in strokes between the input and the reconstruction, but such an analysis would have been more compelling with more complex image datasets. Have the authors tried applying VLAE to such datasets?
I think the Caltech101 Silhouettes benchmark should be treated with caution, as no comparison is made against other competitive approaches like IAF VAE, PixelRNN and Conv DRAW. This means that VLAE significantly outperforms the state-of-the-art in only one of the four settings examined.
A question which is very relevant to this paper is "Does a latent representation on top of an autoregressive model help improve the density modeling performance?" The paper touches this question, but very briefly: the only setting in which VLAE is compared against recent autoregressive approaches shows that it wins against PixelRNN by a small margin.
The proposal to transform the latent code with an autoregressive flow which is equivalent to parametrizing the approximate posterior with an inverse autoregressive flow transformation is also interesting. There is, however, one important distinction to be made between the two approaches: in the former, the prior over the latent code can potentially be very complex whereas in the latter the prior is limited to be a simple, factorized distribution.
It is not clear to me that having a very powerful prior is necessarily a good thing from a representation learning point of view: oftentimes we are interested in learning a representation of the data distribution which is disentangled and composed of roughly independent factors of variation. The degree to which this can be achieved using something as simple as a spherical Gaussian prior is up for discussion, but finding a good balance between the ability of the prior to fit the data and its usefulness as a high-level representation certainly warrants some thought. I would be interested in hearing the authors' opinion on this.
Overall, the paper introduces interesting ideas despite the flaws outlined above, but weaknesses in the empirical evaluation prevent me from recommending its acceptance.
UPDATE: The rating has been revised to a 7 following the authors' reply. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
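The information-theoretic point praised above can be stated in one line of standard VAE notation (assumed here, not quoted from the paper): the ELBO decomposes into a reconstruction term and a KL cost for using the code, and when the decoder can model x on its own, the optimum sets q(z|x) = p(z) and ignores z:
```latex
% Standard single-datapoint ELBO; the KL term is the price of using the code.
% If p(x|z) can model x alone, q(z|x) = p(z) drives this cost to zero.
\log p(x) \;\ge\;
  \underbrace{\mathbb{E}_{q(z \mid x)}\!\left[\log p(x \mid z)\right]}_{\text{reconstruction}}
  \;-\;
  \underbrace{\mathrm{KL}\!\left(q(z \mid x) \,\|\, p(z)\right)}_{\text{cost of the code}}
```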
BysvGP5ee | ICLR.cc/2017/conference | 2017 | Variational Lossy Autoencoder | ["Xi Chen", "Diederik P. Kingma", "Tim Salimans", "Yan Duan", "Prafulla Dhariwal", "John Schulman", "Ilya Sutskever", "Pieter Abbeel"] | Representation learning seeks to expose certain aspects of observed data in a learned representation that's amenable to downstream tasks like classification.
For instance, a good representation for 2D images might be one that describes only global structure and discards information about detailed texture.
In this paper, we present a simple but principled method to learn such global representations by combining Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE and PixelRNN/CNN.
Our proposed VAE model allows us to have control over what the global latent code can learn and, by designing the architecture accordingly, we can force the global latent code to discard irrelevant information such as texture in 2D images, and hence the code only ``autoencodes'' data in a lossy fashion.
In addition, by leveraging autoregressive models as both prior distribution $p(z)$ and decoding distribution $p(x|z)$, we can greatly improve generative modeling performance of VAEs, achieving new state-of-the-art results on MNIST, OMNIGLOT and Caltech-101 as well as competitive results on CIFAR10.
| ["Deep learning", "Unsupervised Learning"] | BkB8c0fEe | Having control over the kind of information learned is interesting and would be useful in several applications | 7: Good paper, accept | This paper proposes a Variational Autoencoder model that can discard information found irrelevant, in order to learn interesting global representations of the data. This can be seen as a lossy compression algorithm, hence the name Variational Lossy Autoencoder. To achieve such model, the authors combine VAEs with neural autoregressive models resulting in a model that has both a latent variable structure and a powerful recurrence structure.
The authors first present an insightful Bits-Back interpretation of VAE to show when and how the latent code is ignored. As it was also mentioned in the literature, they say that the autoregressive part of the model ends up explaining all structure in the data, while the latent variables are not used. Then, they propose two complementary approaches to force the latent variables to be used by the decoder. The first one is to make sure the autoregressive decoder only uses small local receptive field so the model has to use the latent code to learn long-range dependency. The second is to parametrize the prior distribution over the latent code with an autoregressive model.
They also report new state-of-the-art results on binarized MNIST (both dynamical and statically binarization), OMNIGLOT and Caltech-101 Silhouettes.
Review:
The Bits-Back interpretation of VAE is a nice contribution to the community. Having novel interpretations of a model helps to better understand it and sometimes, as in this paper, highlights how it can be improved.
Having fine-grained control over the kind of information that gets included in the learned representation can be useful for a lot of applications. For instance, in image retrieval, such a learned representation could be used to retrieve objects that have a similar shape no matter what texture they have.
However, the authors say they propose two complementary classes of improvements to VAEs, namely the lossy code via explicit information placement (Section 3.1) and learning the prior with an autoregressive flow (Section 3.2), yet they never actually show how a VAE without an AF prior but with a PixelCNN decoder performs. What would be the impact on the latent code if no AF prior is used?
Also, it is not clear whether WindowAround(i) represents only a subset of x_{<i} or whether it can contain any data other than x_i. The authors mention that the window can be represented as a small rectangle adjacent to a pixel x_i; must it only contain pixels above and to the left of x_i (similar to PixelCNN)?
Minor:
In Equation 8, should there be an expectation over the data distribution? | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
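The prior-vs-posterior point discussed in these reviews rests on a change-of-variables identity; in the assumed notation below, pushing a simple base density through an autoregressive flow f on the prior side is equivalent, in the KL term, to applying f^{-1} (an inverse autoregressive flow) on the posterior side:
```latex
% Assumed notation: z = f(\epsilon) is an autoregressive flow, p(\epsilon) a
% simple base density, and the prior p(z) is the pushforward of p(\epsilon).
\mathrm{KL}\big(q(z \mid x) \,\|\, p(z)\big)
  = \mathrm{KL}\big(\tilde{q}(\epsilon \mid x) \,\|\, p(\epsilon)\big),
\qquad
\tilde{q}(\epsilon \mid x)
  = q\big(f(\epsilon) \mid x\big)
    \left|\det \frac{\partial f(\epsilon)}{\partial \epsilon}\right|
```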
BysvGP5ee | ICLR.cc/2017/conference | 2017 | Variational Lossy Autoencoder | ["Xi Chen", "Diederik P. Kingma", "Tim Salimans", "Yan Duan", "Prafulla Dhariwal", "John Schulman", "Ilya Sutskever", "Pieter Abbeel"] | Representation learning seeks to expose certain aspects of observed data in a learned representation that's amenable to downstream tasks like classification.
For instance, a good representation for 2D images might be one that describes only global structure and discards information about detailed texture.
In this paper, we present a simple but principled method to learn such global representations by combining Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE and PixelRNN/CNN.
Our proposed VAE model allows us to have control over what the global latent code can learn and, by designing the architecture accordingly, we can force the global latent code to discard irrelevant information such as texture in 2D images, and hence the code only ``autoencodes'' data in a lossy fashion.
In addition, by leveraging autoregressive models as both prior distribution $p(z)$ and decoding distribution $p(x|z)$, we can greatly improve generative modeling performance of VAEs, achieving new state-of-the-art results on MNIST, OMNIGLOT and Caltech-101 as well as competitive results on CIFAR10.
| ["Deep learning", "Unsupervised Learning"] | rkaibsZ4e | 6: Marginally above acceptance threshold | This paper motivates the combination of autoregressive models with Variational Auto-Encoders and how to control the amount the amount of information stored in the latent code. The authors provide state-of-the-art results on MNIST, OMNIGLOT and Caltech-101.
I find that the insights provided in the paper, e.g. with respect to the effect of having a more powerful decoder on learning the latent code, the bit-back coding, and the lossy decoding are well-written but are not novel.
The difference between an auto-regressive prior and the inverse auto-regressive posterior is new and interesting though.
The model presented combines the recent technique of PixelRNN/PixelCNN and Variational Auto-Encoders with Inverse Auto-Regressive Flows, which enables the authors to obtain state-of-the-art results on MNIST, OMNIGLOT and Caltech-101. Given the insights provided in the paper, the authors are also able to control the amount of information contained in the latent code to an extent.
This paper gathers several insights on Variational Auto-Encoders, scattered across several publications, in a well-written way. From these, the authors are able to obtain state-of-the-art models on small-complexity datasets. Larger-scale experiments will be necessary. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
|
SyJNmVqgg | ICLR.cc/2017/conference | 2017 | Neural Data Filter for Bootstrapping Stochastic Gradient Descent | ["Yang Fan", "Fei Tian", "Tao Qin", "Tie-Yan Liu"] | Mini-batch based Stochastic Gradient Descent (SGD) has been widely used to train deep neural networks efficiently. In this paper, we design a general framework to automatically and adaptively select training data for SGD. The framework is based on neural networks and we call it \emph{\textbf{N}eural \textbf{D}ata \textbf{F}ilter} (\textbf{NDF}). In Neural Data Filter, the whole training process of the original neural network is monitored and supervised by a deep reinforcement network, which controls whether to filter some data in sequentially arrived mini-batches so as to maximize future accumulative reward (e.g., validation accuracy). The SGD process accompanied with NDF is able to use less data and converge faster while achieving comparable accuracy as the standard SGD trained on the full dataset. Our experiments show that NDF bootstraps SGD training for different neural network models including Multi Layer Perceptron Network and Recurrent Neural Network trained on various types of tasks including image classification and text understanding. | ["Reinforcement Learning", "Deep learning", "Optimization"] | rktOx2WNl | Review | 6: Marginally above acceptance threshold | This work proposes to augment normal gradient descent algorithms with a "Data Filter" that acts as a curriculum teacher by selecting which examples the trained target network should see to learn optimally. Such a filter is learned simultaneously with the target network, and trained via Reinforcement Learning algorithms receiving rewards based on the state of training with respect to some pseudo-validation set.
Stylistic comment: please use the more common style of "(Author, year)" rather than "Author (year)" when the Author is *not* referred to or used in the sentence.
E.g. "and its variants such as Adagrad Duchi et al. (2011)" should be "such as Adagrad (Duchi et al., 2011)", and "proposed in Andrychowicz et al. (2016)," should remain so.
I think the paragraph containing "What we need to do is, after seeing the mini-batch Dt of M training instances, we dynamically determine which instances in Dt are used for training and which are filtered." should be clarified. What is "seeing"? That is, you should mention explicitly that you do the forward pass first, then compute features from that, and then decide for which examples to perform the backward pass.
There are a few choices in this work which I do not understand:
Why wait until the end of the episode to update your REINFORCE policy (Algorithm 2), but train your actor-critic at each step (Algorithm 3)? You say REINFORCE has high variance, which is true, but that does not mean it cannot be trained at each step (unless you have some experiments that suggest otherwise, in which case they should be included or mentioned in the paper).
Similarly, why not train REINFORCE with the same reward as your Actor-Critic model? And vice-versa? You claim several times that a limitation of REINFORCE is that you need to wait for the episode to be over, but considering your data is i.i.d., you can make your episode be anything from a single training step, one D_t, to the whole multi-epoch training procedure.
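To make the per-step alternative concrete: treating each mini-batch as a one-step episode, a REINFORCE update could look like the sketch below. This is purely my own illustration with invented features and a stand-in reward, not the paper's Algorithm 2:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(5)  # weights of a logistic filter policy over 5 per-example features

def keep_prob(features):
    """Probability of keeping each example, given per-example features."""
    return 1.0 / (1.0 + np.exp(-features @ w))

for step in range(100):
    feats = rng.normal(size=(32, 5))   # stand-in features of one mini-batch (loss, margin, ...)
    p = keep_prob(feats)
    keep = rng.random(32) < p          # sample binary filter actions
    # ... the target network would be updated on the kept examples here ...
    reward = rng.normal()              # stand-in for the change in validation accuracy
    # one-step-episode REINFORCE: gradient of log Bernoulli(keep | p), scaled by the reward
    grad_logp = (keep - p)[:, None] * feats
    w += 0.01 * reward * grad_logp.mean(axis=0)
```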
I have a few qualms with the experimental setting:
- is Figure 2 obtained from a single (i.e. one per setup) experiment? From different initial weights? If so, there is no proper way of knowing whether results are chance or not! This is a serious concern for me.
- with most state-of-the-art work using optimization methods such as Adam and RMSProp, it is surprising that they were not experimented with.
- it is not clear what the learning rates are; how fast should the RL part adapt to the SL part? It's not clear that this was experimented with at all.
- the environment, i.e. the target network being trained, is not stationary at all. It would have been interesting to measure how much the policy changes as a function of time. Figure 3 could be either the result of the policy adapting, or of the policy remaining fixed while the features change (which could indicate a failure of the policy to adapt).
- in fact it is not really addressed in the paper that the environment is non-stationary: given the current setup, the distribution of features will change as the target network progresses. This has an impact on optimization.
- how is the "pseudo-validation" data, which the policy targets, chosen? It should be a subset of the training data. The second paragraph of Section 3.2 suggests something of the sort, but then your algorithms suggest that the same data is used to train both the policies and the networks, so I am unsure of which is which.
Overall the idea is novel and interesting, the paper is well written for the most part, but the methodology has some flaws. Clearer explanations and either more justification of the experimental choices or more experiments are needed to make this paper complete. Unless the authors convince me otherwise, I think it would be worth waiting for more experiments and submitting a very strong paper rather than presenting this (potentially powerful!) idea with weak results.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
SyJNmVqgg | ICLR.cc/2017/conference | 2017 | Neural Data Filter for Bootstrapping Stochastic Gradient Descent | ["Yang Fan", "Fei Tian", "Tao Qin", "Tie-Yan Liu"] | Mini-batch based Stochastic Gradient Descent(SGD) has been widely used to train deep neural networks efficiently. In this paper, we design a general framework to automatically and adaptively select training data for SGD. The framework is based on neural networks and we call it \emph{\textbf{N}eural \textbf{D}ata \textbf{F}ilter} (\textbf{NDF}). In Neural Data Filter, the whole training process of the original neural network is monitored and supervised by a deep reinforcement network, which controls whether to filter some data in sequentially arrived mini-batches so as to maximize future accumulative reward (e.g., validation accuracy). The SGD process accompanied with NDF is able to use less data and converge faster while achieving comparable accuracy as the standard SGD trained on the full dataset. Our experiments show that NDF bootstraps SGD training for different neural network models including Multi Layer Perceptron Network and Recurrent Neural Network trained on various types of tasks including image classification and text understanding. | ["Reinforcement Learning", "Deep learning", "Optimization"] | HyoMSTSVl | Final Review | 4: Ok but not good enough - rejection | Final review: The writers were very responsive and I agree the reviewer2 that their experimental setup is not wrong after all and increased the score by one. But I still think there is lack of experiments and the results are not conclusive. As a reader I am interested in two things, either getting a new insight and understanding something better, or learn a method for a better performance. This paper falls in the category two, but fails to prove it with more throughout and rigorous experiments. In summary the paper lacks experiments and results are inconclusive and I do not believe the proposed method would be quite useful and hence not a conference level publication.
--
The paper proposes to train a policy network alongside the main network, selecting a subset of the data during training in order to achieve faster convergence with less data.
Pros:
It's well written and straightforward to follow
The algorithm has been explained clearly.
Cons:
Section 2 mentions that the validation accuracy is used as one of the feature vectors for training the NDF. This invalidates the experiments, as the training procedure is using some data from the validation set.
Only one dataset has been tested on. Papers such as this one that claim faster convergence rates should be tested on multiple datasets and network architectures to show consistency of results. This holds especially for larger datasets: since the proposed method is going to use less training data at each iteration, it has to be shown on much larger-scale datasets such as ImageNet.
As discussed in more detail in the pre-review questions, if the paper is claiming faster convergence then it has to compare the learning curves with other baselines such as Adam. Plain SGD is a very unfair comparison, as it is almost never used in practice. And this holds regardless of which black-box optimizer they use. The case could be that Adam alone as the black-box optimizer works as well as or better than Adam as the black-box optimizer plus NDF. | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
SyJNmVqgg | ICLR.cc/2017/conference | 2017 | Neural Data Filter for Bootstrapping Stochastic Gradient Descent | ["Yang Fan", "Fei Tian", "Tao Qin", "Tie-Yan Liu"] | Mini-batch based Stochastic Gradient Descent(SGD) has been widely used to train deep neural networks efficiently. In this paper, we design a general framework to automatically and adaptively select training data for SGD. The framework is based on neural networks and we call it \emph{\textbf{N}eural \textbf{D}ata \textbf{F}ilter} (\textbf{NDF}). In Neural Data Filter, the whole training process of the original neural network is monitored and supervised by a deep reinforcement network, which controls whether to filter some data in sequentially arrived mini-batches so as to maximize future accumulative reward (e.g., validation accuracy). The SGD process accompanied with NDF is able to use less data and converge faster while achieving comparable accuracy as the standard SGD trained on the full dataset. Our experiments show that NDF bootstraps SGD training for different neural network models including Multi Layer Perceptron Network and Recurrent Neural Network trained on various types of tasks including image classification and text understanding. | ["Reinforcement Learning", "Deep learning", "Optimization"] | SyBXdRUEx | data filtering for faster sgd | 7: Good paper, accept | Paper is easy to follow, Idea is pretty clear and makes sense.
Experimental results are hard to judge; it would be nice to have other baselines.
For faster training convergence, the question is how well tuned SGD is; I didn't
see any mention of a learning rate schedule. Also, it would be important to test
this on other datasets, since success with filtering training data could be task-dependent. |
H1zJ-v5xl | ICLR.cc/2017/conference | 2017 | Quasi-Recurrent Neural Networks | ["James Bradbury", "Stephen Merity", "Caiming Xiong", "Richard Socher"] | Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep’s computation on the previous timestep’s output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks. | ["Natural language processing", "Deep learning"] | HJcItTz4e | Review | 6: Marginally above acceptance threshold | This paper introduces the Quasi-Recurrent Neural Network (QRNN) that dramatically limits the computational burden of the temporal transitions in
sequence data. Briefly (and slightly inaccurately), the model starts from the LSTM structure but removes all but the diagonal elements of the transition
matrices. It also generalizes the connections from lower layers to upper layers to general convolutions in time (the standard LSTM can be thought of as a convolution with a receptive field of one time-step).
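For concreteness, the structure just described can be sketched as follows; this is my own minimal NumPy paraphrase of the f-pooling variant, not the authors' code:

```python
import numpy as np

def qrnn_f_pool(x, Wz, Wf, k):
    """x: (T, d_in) inputs; Wz, Wf: (k * d_in, d_h) filters of a width-k causal convolution."""
    T, d_in = x.shape
    x_pad = np.vstack([np.zeros((k - 1, d_in)), x])             # left-pad so the conv is causal
    win = np.stack([x_pad[t:t + k].ravel() for t in range(T)])  # (T, k * d_in) input windows
    z = np.tanh(win @ Wz)                                       # candidates, all t in parallel
    f = 1.0 / (1.0 + np.exp(-(win @ Wf)))                       # forget gates, all t in parallel
    c, cs = np.zeros(Wz.shape[1]), []
    for t in range(T):                      # the only sequential part, and it is
        c = f[t] * c + (1.0 - f[t]) * z[t]  # element-wise, i.e. a diagonal recurrence
        cs.append(c)
    return np.stack(cs)
```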
As discussed by the authors, the model is related to a number of other recent modifications of RNNs, in particular ByteNet and strongly-typed RNNs (T-RNN). In light of these existing models, the novelty of the QRNN is somewhat diminished; however, in my opinion there is still sufficient novelty to justify publication.
The authors present a reasonably solid set of empirical results that support the claims of the paper. It does indeed seem that this particular modification of the LSTM warrants attention from others.
While I feel that the contribution is somewhat incremental, I recommend acceptance.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
H1zJ-v5xl | ICLR.cc/2017/conference | 2017 | Quasi-Recurrent Neural Networks | ["James Bradbury", "Stephen Merity", "Caiming Xiong", "Richard Socher"] | Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep’s computation on the previous timestep’s output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks. | ["Natural language processing", "Deep learning"] | B1pm8p-Ve | Nice paper | 7: Good paper, accept | This paper introduces a novel RNN architecture named QRNN.
QRNNs are similar to gated RNNs; however, their gate and state update functions depend only on recent input values, not on the previous hidden state. The gate and state update functions are computed through a temporal convolution applied to the input.
Consequently, QRNNs allow for more parallel computation, since compared to a GRU or LSTM they have fewer operations in their hidden-to-hidden transition that depend on the previous hidden state. However, they possibly lose expressiveness relative to those models. For instance, it is not clear how such a model deals with long-term dependencies without having to stack up several QRNN layers.
Various extensions of the QRNN, leveraging zoneout, dense connections, or seq2seq with attention, are also proposed.
The authors evaluate their approach on various tasks and datasets (sentiment classification, word-level language modelling and character-level machine translation).
Overall the paper is an enjoyable read and the proposed approach is interesting.
Pros:
- Address an important problem
- Nice empirical evaluation showing the benefit of their approach
- Demonstrate up to 16x speed-up relative to an LSTM
Cons:
- Somewhat incremental novelty compared to (Balduzzi et al., 2016)
A few specific questions:
- Is the densely-connected layer necessary to obtain good results on the IMDB task? How does a simple 2-layer QRNN compare with a 2-layer LSTM?
- How do the f-, fo- and ifo-pooling variants perform comparatively?
- How does the QRNN deal with long-term time dependencies? Did you try it on simple toy tasks such as the copy or adding task? | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
H1zJ-v5xl | ICLR.cc/2017/conference | 2017 | Quasi-Recurrent Neural Networks | ["James Bradbury", "Stephen Merity", "Caiming Xiong", "Richard Socher"] | Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep’s computation on the previous timestep’s output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks. | ["Natural language processing", "Deep learning"] | Bk3_qAxNg | good. | 7: Good paper, accept |
This paper points out that you can take an LSTM and make the gates only a function of the last few inputs - h_t = f(x_t, x_{t-1}, ...x_{t-T}) - instead of the standard - h_t = f(x_t, h_{t-1}) -, and that if you do so the networks can run faster and work better. You're moving compute from a serial stream to a parallel stream and also making the serial stream more parallel. Unfortunately, this simple, effective and interesting concept is somewhat obscured by confusing language.
- I would encourage the authors to improve the explanation of the model.
- Another improvement might be to explicitly go over some of the big Oh calculations, or give an example of exactly where the speed improvements are coming from.
- Otherwise the experiments seem adequate and I enjoyed this paper.
This could be a high value contribution and become a standard neural network component if it can be replicated and if it turns out to work reliably in multiple settings.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BJjn-Yixl | ICLR.cc/2017/conference | 2017 | Attentive Recurrent Comparators | ["Pranav Shyam", "Ambedkar Dukkipati"] | Attentive Recurrent Comparators (ARCs) are a novel class of neural networks built with attention and recurrence that learn to estimate the similarity of a set of objects by cycling through them and making observations. The observations made in one object are conditioned on the observations made in all the other objects. This allows ARCs to learn to focus on the salient aspects needed to ascertain similarity. Our simplistic model that does not use any convolutions performs comparably to Deep Convolutional Siamese Networks on various visual tasks. However using ARCs and convolutional feature extractors in conjunction produces a model that is significantly better than any other method and has superior generalization capabilities. On the Omniglot dataset, ARC based models achieve an error rate of 1.5\% in the One-Shot classification task - a 2-3x reduction compared to the previous best models. This is also the first Deep Learning model to outperform humans (4.5\%) and surpass the state of the art accuracy set by the highly specialized Hierarchical Bayesian Program Learning (HBPL) system (3.3\%). | ["Deep learning", "Computer vision"] | rJu4Ftb4x | Strong experimental results, but somewhat unclear where the improvements are coming from | 5: Marginally below acceptance threshold | This paper presents an attention based recurrent approach to one-shot learning. It reports quite strong experimental results (surpassing human performance/HBPL) on the Omniglot dataset, which is somewhat surprising because it seems to make use of very standard neural network machinery. The authors also note that other have helped verify the results (did Soumith Chintala reproduce the results?) and do provide source code.
After reading this paper, I'm left a little perplexed as to where the big performance improvements are coming from, as it seems to share a lot of the same components as previous work. If the authors could report results from a broader suite of experiments, as in previous work (e.g. matching networks), it would be much more convincing. An ablation study would also help with understanding why this model does so well. | 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper |
BJjn-Yixl | ICLR.cc/2017/conference | 2017 | Attentive Recurrent Comparators | ["Pranav Shyam", "Ambedkar Dukkipati"] | Attentive Recurrent Comparators (ARCs) are a novel class of neural networks built with attention and recurrence that learn to estimate the similarity of a set of objects by cycling through them and making observations. The observations made in one object are conditioned on the observations made in all the other objects. This allows ARCs to learn to focus on the salient aspects needed to ascertain similarity. Our simplistic model that does not use any convolutions performs comparably to Deep Convolutional Siamese Networks on various visual tasks. However using ARCs and convolutional feature extractors in conjunction produces a model that is significantly better than any other method and has superior generalization capabilities. On the Omniglot dataset, ARC based models achieve an error rate of 1.5\% in the One-Shot classification task - a 2-3x reduction compared to the previous best models. This is also the first Deep Learning model to outperform humans (4.5\%) and surpass the state of the art accuracy set by the highly specialized Hierarchical Bayesian Program Learning (HBPL) system (3.3\%). | ["Deep learning", "Computer vision"] | HJfCu9Amg | The paper need more improvements to be accepted | 3: Clear rejection | This paper describes a method that estimates the similarity between a set of images by alternatively attend each image with a recurrent manner. The idea of the paper is interesting, which mimic the human's behavior. However, there are several cons of the paper:
1. The paper is not well written. There are too many 'TODO' and 'CITE' placeholders in the final version of the paper, which indicates that the paper was submitted in a rush or that the authors did not take much care with it. I think the paper is not suitable for publication in its current version.
2. Missing experimental results. The paper mentions the LFW dataset; however, it does not provide results on LFW. (At least I did not find them in the version of Dec. 13th.)
3. The experiments on the Omniglot dataset are not sufficient. I suggest that the paper provide some illustrations of how the model attends to the two images (e.g. the trajectory of attention). | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
BJjn-Yixl | ICLR.cc/2017/conference | 2017 | Attentive Recurrent Comparators | ["Pranav Shyam", "Ambedkar Dukkipati"] | Attentive Recurrent Comparators (ARCs) are a novel class of neural networks built with attention and recurrence that learn to estimate the similarity of a set of objects by cycling through them and making observations. The observations made in one object are conditioned on the observations made in all the other objects. This allows ARCs to learn to focus on the salient aspects needed to ascertain similarity. Our simplistic model that does not use any convolutions performs comparably to Deep Convolutional Siamese Networks on various visual tasks. However using ARCs and convolutional feature extractors in conjunction produces a model that is significantly better than any other method and has superior generalization capabilities. On the Omniglot dataset, ARC based models achieve an error rate of 1.5\% in the One-Shot classification task - a 2-3x reduction compared to the previous best models. This is also the first Deep Learning model to outperform humans (4.5\%) and surpass the state of the art accuracy set by the highly specialized Hierarchical Bayesian Program Learning (HBPL) system (3.3\%). | ["Deep learning", "Computer vision"] | HJWrxQM4x | experimental section improved but still very weak on analysis and insight | 4: Ok but not good enough - rejection | This paper introduces an attention-based recurrent network that learns to compare images by attending iteratively back and forth between a pair of images. Experiments show state-of-the-art results on Omniglot, though a large part of the performance gain comes from when extracted convolutional features are used as input.
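To make the back-and-forth attention concrete, a schematic sketch follows; this is my own simplification (a hard square crop as the glimpse and a plain RNN controller on square images), not the authors' implementation:

```python
import numpy as np

def glimpse(image, h, Wg, size=4):
    """Crop a size x size patch at a location predicted from the controller state h."""
    loc = (np.tanh(Wg @ h) + 1.0) / 2.0 * (image.shape[0] - size)  # Wg: (2, d_h)
    cy, cx = loc.astype(int)
    return image[cy:cy + size, cx:cx + size].ravel()

def arc_embed(img_a, img_b, Wg, Wh, Wo, n_glimpses=8):
    h = np.zeros(Wh.shape[0])
    for t in range(n_glimpses):
        img = img_a if t % 2 == 0 else img_b            # alternate attention between the two images
        h = np.tanh(Wh @ h + Wo @ glimpse(img, h, Wg))  # each glimpse is conditioned on what
    return h                                            # was seen in the other image so far
```

The final state h would then feed a small classifier that outputs the similarity score.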
The paper is significantly improved from the original submission and reflects changes based on pre-review questions. However, while an attempt was made to include more qualitative results (e.g. Fig. 2), it is still relatively weak and could benefit from more examples and analysis. Also, why is the attention in Fig. 2 always attending over the full character? Although it is zooming in, shouldn’t it attend to relevant parts of the character? Attending to the full character on a solid background seems a trivial solution, in which case it is unclear where the large performance gains are coming from.
While the paper is much more polished now, it is still lacking in details in some respects, e.g. details of the convolutional feature extractor used, which gives a large performance gain. | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
Sy8gdB9xx | ICLR.cc/2017/conference | 2017 | Understanding deep learning requires rethinking generalization | ["Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals"] | Despite their massive size, successful deep artificial neural networks can
exhibit a remarkably small difference between training and test performance.
Conventional wisdom attributes small generalization error either to properties
of the model family, or to the regularization techniques used during training.
Through extensive systematic experiments, we show how these traditional
approaches fail to explain why large neural networks generalize well in
practice. Specifically, our experiments establish that state-of-the-art
convolutional networks for image classification trained with stochastic
gradient methods easily fit a random labeling of the training data. This
phenomenon is qualitatively unaffected by explicit regularization, and occurs
even if we replace the true images by completely unstructured random noise. We
corroborate these experimental findings with a theoretical construction
showing that simple depth two neural networks already have perfect finite
sample expressivity as soon as the number of parameters exceeds the
number of data points as it usually does in practice.
We interpret our experimental findings by comparison with traditional models. | ["Deep learning"] | HJwljOv4x | 10: Top 5% of accepted papers, seminal paper | This paper offers a very interesting empirical observation regarding the memorization capacity of current large deep convolutional networks. It shows they are able to perfectly memorize full training-set input-to-label mapping, even with random labels (i.e. when label has been rendered independent of input), using the same architecture and hyper-parameters as used for training with correct labels, except for a longer time to convergence.
Extensive experiments support the main argument of the paper.
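The randomization test at the heart of these experiments is easy to state in code; the sketch below is my own, with a placeholder training step rather than the authors' pipeline:

```python
import numpy as np

def randomize_labels(y_true, num_classes, seed=0):
    """Draw labels uniformly from the same label set, independently of the inputs."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, num_classes, size=len(y_true))

# y_rand = randomize_labels(y_train, num_classes=10)
# model.fit(x_train, y_rand)   # placeholder call: the finding is that training error
#                              # still reaches zero, while test accuracy stays at chance
```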
Reflections and observations about finite-sample expressivity and implicit regularization with linear models fit logically within the main theme and are equally thought-provoking.
While this work doesn’t propose much of an explanation for the good generalization ability of what it clearly establishes to be overparameterized models,
it does compel the reader to think about the generalization problem from a different angle than how it is traditionally understood.
In my view, raising good questions and pointing to apparent paradox is the initial spark that can lead to fundamental progress in understanding. So even without providing any clear answers, I think this work is a very valuable contribution to research in the field.
Detailed question: in your solving of Eq. 3 for MNIST and CIFAR10, did you use integer y class targets, or a binary one-versus all approach yielding 10 discriminant functions (hence a different alpha vector for each class)?
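For reference, the one-versus-all reading of this question amounts to the sketch below (my own, with a Gaussian kernel chosen arbitrarily; the paper may use a different kernel or solver):

```python
import numpy as np

def kernel_least_squares(X, y, num_classes, gamma=1.0):
    """Solve K @ A = Y for one-hot targets Y, giving one alpha vector per class."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                            # (n, n) Gaussian kernel matrix
    Y = np.eye(num_classes)[y]                         # (n, num_classes) one-hot targets
    A = np.linalg.solve(K + 1e-8 * np.eye(len(X)), Y)  # small jitter for numerical stability
    return A   # predictions: argmax over the columns of K_test @ A
```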
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
|
Sy8gdB9xx | ICLR.cc/2017/conference | 2017 | Understanding deep learning requires rethinking generalization | ["Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals"] | Despite their massive size, successful deep artificial neural networks can
exhibit a remarkably small difference between training and test performance.
Conventional wisdom attributes small generalization error either to properties
of the model family, or to the regularization techniques used during training.
Through extensive systematic experiments, we show how these traditional
approaches fail to explain why large neural networks generalize well in
practice. Specifically, our experiments establish that state-of-the-art
convolutional networks for image classification trained with stochastic
gradient methods easily fit a random labeling of the training data. This
phenomenon is qualitatively unaffected by explicit regularization, and occurs
even if we replace the true images by completely unstructured random noise. We
corroborate these experimental findings with a theoretical construction
showing that simple depth two neural networks already have perfect finite
sample expressivity as soon as the number of parameters exceeds the
number of data points as it usually does in practice.
We interpret our experimental findings by comparison with traditional models. | ["Deep learning"] | rJUCC0pQg | Memorization, overfitting, generalization | 10: Top 5% of accepted papers, seminal paper | The authors of this work shed light on the generalization properties of deep neural networks. Specifically, they consider various regularization methods (data augmentation, weight decay, and dropout). They also show that the quality of the labels, namely label noise, significantly affects the generalization ability of the network.
There are a number of experimental results, most of which are intuitive. Here are some specific questions that were not addressed in the paper:
1. Given two different DNN architectures with the same number of parameters, why do certain architectures generalize better than others? In other words, is it enough to consider only the size (# of parameters) of the network and the size of the input (number of samples and their dimensionality), to be able to reason about the generalization properties of a given network?
2. Does it make sense to study the stability of predictions given added dropout during inference?
Finally, despite the number of experiments and results, the authors do not draw a conclusion or offer a strong insight into what is going on with generalization in DNNs or how to proceed forward. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
Sy8gdB9xx | ICLR.cc/2017/conference | 2017 | Understanding deep learning requires rethinking generalization | ["Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals"] | Despite their massive size, successful deep artificial neural networks can
exhibit a remarkably small difference between training and test performance.
Conventional wisdom attributes small generalization error either to properties
of the model family, or to the regularization techniques used during training.
Through extensive systematic experiments, we show how these traditional
approaches fail to explain why large neural networks generalize well in
practice. Specifically, our experiments establish that state-of-the-art
convolutional networks for image classification trained with stochastic
gradient methods easily fit a random labeling of the training data. This
phenomenon is qualitatively unaffected by explicit regularization, and occurs
even if we replace the true images by completely unstructured random noise. We
corroborate these experimental findings with a theoretical construction
showing that simple depth two neural networks already have perfect finite
sample expressivity as soon as the number of parameters exceeds the
number of data points as it usually does in practice.
We interpret our experimental findings by comparison with traditional models. | ["Deep learning"] | B1nh-uNVl | 9: Top 15% of accepted papers, strong accept | This paper presents a set of experiments where, via clever use of randomization and noise addition, the authors demonstrate the enormous modeling (memorization) power of deep neural networks with large enough capacity. Yet these same models have very good generalization behavior even when all obvious explicit or implicit regularizers are removed. These observations are used to argue that classical theory (VC dimension, Rademacher complexity, uniform stability) is not able to explain the generalization behavior of deep neural networks, necessitating novel theory.
This is a very interesting and thought-provoking body of work, and I am in complete accord with the observations and conclusions of the paper. Classical generalization theory is indeed often at a loss with complex enough model families. As the authors point out, once model families reach a point where they have the capacity to memorize training sets, the classical theory does not yield useful results that could give insight into the generalization behavior of these models, leaving one to empirical studies and observations.
A minor clarification comment: on page 2, “… true labels were replaced by random labels.” Please state that the random labels came from the same set as the true labels, to clarify the experiment. | 3: The reviewer is fairly confident that the evaluation is correct |
|
BJuysoFeg | ICLR.cc/2017/conference | 2017 | Revisiting Batch Normalization For Practical Domain Adaptation | ["Yanghao Li", "Naiyan Wang", "Jianping Shi", "Jiaying Liu", "Xiaodi Hou"] | Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent study shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrary to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It archives state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance. | ["dnn", "batch normalization", "network", "adabn", "practical domain adaptation", "unprecedented success", "image classification", "object detection"] | B1atIp-Ve | An interesting paper that shows improvements, but I am not sure about its technical advantage | 5: Marginally below acceptance threshold | Overall I think this is an interesting paper which shows empirical performance improvement over baselines. However, my main concern with the paper is regarding its technical depth, as the gist of the paper can be summarized as the following: instead of keeping the batch norm mean and bias estimation over the whole model, estimate them on a per-domain basis. I am not sure if this is novel, as this is a natural extension of the original batch normalization paper. Overall I think this paper is more fit as a short workshop presentation rather than a full conference paper.
Detailed comments:
Section 3.1: I respectfully disagree that the core idea of BN is to align the distribution of training data. It does this as a side effect, but the major purpose of BN is to properly control the scale of the gradient so we can train very deep models without the problem of vanishing gradients. It is plausible that intermediate features from different datasets naturally show up as different groups in a t-SNE embedding. This is not particular to batch normalization: visualize a set of intermediate features from AlexNet and one gets the same results. So the premise in Section 3.1 is not accurate.
Section 3.3: I have the same concern as the other reviewer. It seems to be quite detached from the general idea of AdaBN. Equation 2 presents an obvious argument that the combined BN-fully_connected layer forms a linear transform, which is true in the original BN case and in this case as well. I do not think it adds much theoretical depth to the paper. (In general the novelty of this paper seems low.)
Experiments:
- section 4.3.1 is not an accurate measure of the "effectiveness" of the proposed method, but a verification of a simple fact: previously, we normalize the source domain features into a Gaussian distribution. The proposed method explicitly normalizes the target domain features into the same Gaussian distribution as well. So, it is obvious that the KL divergence between these two distributions is smaller - in fact, one is *explicitly* making them close (see the sketch after this list). However, this does not directly correlate with the effectiveness of the final classification performance.
- section 4.3.2: the sensitivity analysis is a very interesting read, as it suggests that only a very few number of images are needed to account for the domain shift in the AdaBN parameter estimation. This seems to suggest that a single "whitening" operation is already good enough to offset the domain bias (in both cases shown, a single batch is sufficient to recover about 80% of the performance gain, although I cannot get data for even smaller number of examples from the figure). It would thus be useful to have a comparison between these approaches, and also a detailed analysis of the effect from each layer of the model - the current analysis seems a bit thin. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
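Regarding the sketch referenced in the 4.3.1 point above: the claim that the divergence shrinks by construction is easy to verify directly. This is my own check, not code from the paper; a univariate Gaussian is fitted to one feature channel per domain after each domain is normalized with its own statistics:

```python
import numpy as np

def gaussian_kl(mu1, s1, mu2, s2):
    """Closed-form KL( N(mu1, s1^2) || N(mu2, s2^2) ) for univariate Gaussians."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

rng = np.random.default_rng(0)
src = rng.normal(1.0, 3.0, size=10_000)    # stand-in source-domain feature channel
tgt = rng.normal(-2.0, 0.5, size=10_000)   # stand-in target-domain feature channel
src_n = (src - src.mean()) / src.std()     # each domain normalized with ITS OWN statistics
tgt_n = (tgt - tgt.mean()) / tgt.std()
print(gaussian_kl(src_n.mean(), src_n.std(), tgt_n.mean(), tgt_n.std()))  # ~0 by construction
```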
BJuysoFeg | ICLR.cc/2017/conference | 2017 | Revisiting Batch Normalization For Practical Domain Adaptation | ["Yanghao Li", "Naiyan Wang", "Jianping Shi", "Jiaying Liu", "Xiaodi Hou"] | Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent study shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrary to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It archives state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance. | ["dnn", "batch normalization", "network", "adabn", "practical domain adaptation", "unprecedented success", "image classification", "object detection"] | rkpVV6H4l | trivially simple yet effective | 6: Marginally above acceptance threshold | This paper proposes a simple domain adaptation technique in which batch normalization is performed separately in each domain.
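In PyTorch terms, the technique described above amounts to something like the following sketch (my own rendering; the paper specifies the statistics update, not this API, and the unlabeled target loader is an assumption):

```python
import torch

def adapt_bn_statistics(model, target_loader, device="cpu"):
    """Re-estimate BatchNorm running mean/variance on target-domain data.
    No weights are updated; only the normalization statistics change."""
    model.eval()
    bn_layers = [m for m in model.modules()
                 if isinstance(m, torch.nn.modules.batchnorm._BatchNorm)]
    for m in bn_layers:
        m.reset_running_stats()
        m.momentum = None          # None => exact cumulative average over all batches
        m.train()                  # BN updates its statistics only in train mode
    with torch.no_grad():
        for x, _ in target_loader:
            model(x.to(device))    # forward passes only; no optimizer step
    model.eval()
```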
Pros:
The method is very simple and easy to understand and apply.
The experiments demonstrate that the method compares favorably with existing methods on standard domain adaptation tasks.
The analysis in section 4.3.2 shows that a very small number of target domain samples are needed for adaptation of the network.
Cons:
There is little novelty -- the method is arguably too simple to be called a “method.” Rather, it’s the most straightforward/intuitive approach when using a network with batch normalization for domain adaptation. The alternative -- using the BN statistics from the source domain for target domain examples -- is less natural, to me. (I guess this alternative is what’s done in the Inception BN results in Table 1-2?)
The analysis in section 4.3.1 is superfluous except as a sanity check -- KL divergence between the distributions should be 0 when each distribution is shifted/scaled to N(0,1) by BN.
Section 3.3: it’s not clear to me what point is being made here.
Overall, there’s not much novelty here, but it’s hard to argue that simplicity is a bad thing when the method is clearly competitive with or outperforming prior work on the standard benchmarks (in a domain adaptation tradition that started with “Frustratingly Easy Domain Adaptation”). If accepted, Sections 4.3.1 and 3.3 should be removed or rewritten for clarity for a final version. | 3: The reviewer is fairly confident that the evaluation is correct |
BJuysoFeg | ICLR.cc/2017/conference | 2017 | Revisiting Batch Normalization For Practical Domain Adaptation | ["Yanghao Li", "Naiyan Wang", "Jianping Shi", "Jiaying Liu", "Xiaodi Hou"] | Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent study shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrary to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It archives state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance. | ["dnn", "batch normalization", "network", "adabn", "practical domain adaptation", "unprecedented success", "image classification", "object detection"] | rJW8h4GEl | Final review | 4: Ok but not good enough - rejection | Update: I thank the authors for their comments. I still think that the method needs more experimental evaluation: for now, it's restricted to the settings in which one can use pre-trained ImageNet model, but it's also important to show the effectiveness in scenarios where one needs to train everything from scratch. I would love to see a fair comparison of the state-of-the-art methods on toy datasets (e.g. see (Bousmalis et al., 2016), (Ganin & Lempitsky, 2015)). In my opinion, it's a crucial point that doesn't allow me to increase the rating.
This paper proposes a simple trick turning batch normalization into a domain adaptation technique. The authors show that per-batch means and variances normally computed as a part of the BN procedure are sufficient to discriminate the domain. This observation leads to an idea that adaptation for the target domain can be performed by replacing population statistics computed on the source dataset by the corresponding statistics from the target dataset.
Overall, I think the paper is more suitable for a workshop track rather than for the main conference track. My main concerns are the following:
1. Although the main idea is very simple, it feels like the paper is composed in such a way as to make the main contribution less obvious (e.g. the idea could have been described in the abstract, but the authors avoided doing so).
2. (This one is from the pre-review questions) The authors are using much stronger base CNN which may account for the bulk of the reported improvement. In order to prove the effectiveness of the trick, the authors would need to conduct a fair comparison against competing methods. It would be highly desirable to conduct this comparison also in the case of a model trained from scratch (as opposed to reusing ImageNet-trained networks).
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJbbOLcex | ICLR.cc/2017/conference | 2017 | TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency | ["Adji B. Dieng", "Chong Wang", "Jianfeng Gao", "John Paisley"] | In this paper, we propose TopicRNN, a recurrent neural network (RNN)-based language model designed to directly capture the global semantic meaning relating words in a document via latent topics. Because of their sequential nature, RNNs are good at capturing the local structure of a word sequence – both semantic and syntactic – but might face difficulty remembering long-range dependencies. Intuitively, these long-range dependencies are of semantic nature. In contrast, latent topic models are able to capture the global underlying semantic structure of a document but do not account for word ordering. The proposed TopicRNN model integrates the merits of RNNs and latent topic models: it captures local (syntactic) dependencies using an RNN and global (semantic) dependencies using latent topics. Unlike previous work on contextual RNN language modeling, our model is learned end-to-end. Empirical results on word prediction show that TopicRNN outperforms existing contextual RNN baselines. In addition, TopicRNN can be used as an unsupervised feature extractor for documents. We do this for sentiment analysis on the IMDB movie review dataset and report an error rate of 6.28%. This is comparable to the state-of-the-art 5.91% resulting from a semi-supervised approach. Finally, TopicRNN also yields sensible topics, making it a useful alternative to document models such as latent Dirichlet allocation. | ["Natural language processing", "Deep learning"] | Hy_fitzEx | Nice work on feature extraction | 8: Top 50% of accepted papers, clear accept | This work combines a LDA-type topic model with a RNN and models this by having an additive effect on the predictive distribution via the topic parameters. A variational auto-encoder is used to infer the topic distribution for a given piece of text and the RNN is trained as a RNNLM. The last hidden state of the RNNLM and the topic parameters are then concatenated to use as a feature representation.
The paper is well written and easy to understand. Using the topic as an additive effect on the vocabulary allows for easy inference but intuitively I would expect the topic to affect the dynamics too, e.g. the state of the RNN. The results on using this model as a feature extractor for IMDB are quite strong. Is the RNN fine-tuned on the labelled IMDB data? However, the results for PTB are weaker. From the original paper, an ensemble of 2 LSTMs is able to match the topicRNN score. This method of jointly modelling topics and a language model seems effective and relatively easy to implement.
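Schematically, the additive structure under discussion is the following (my own notation following the paper's description; the shapes are assumptions):

```python
import numpy as np

def topicrnn_word_probs(h_t, theta, W, B, l_t):
    """h_t: RNN state (d,); theta: topic proportions (K,); W: (V, d); B: (V, K);
    l_t = 1 marks a stop word, switching the topic contribution off."""
    logits = W @ h_t + (1 - l_t) * (B @ theta)   # the topic term is additive in the logits
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e / e.sum()
```

Written this way, the l_t = 0.5 question below would correspond to always adding half of the topic term.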
Finally, the IMDB result is no longer state of the art since this result appeared in May (Miyato et al., Adversarial Training Methods for Semi-Supervised Text Classification).
Some questions:
How important is the stop word modelling? What do the results look like if l_t = 0.5 for all t?
It seems surprising that the RNN was more effective than the LSTM. Was gradient clipping tried in the topicLSTM case? Do GRUs also fail to work?
It is also unfortunate that the model requires a stop-word list. Is the link in footnote 4 the one that is used in the experiments?
Does factoring out the topics in this way allow the RNN to scale better with more neurons? How reasonable does the topic distribution look for individual documents? How peaked do they tend to be? Can you show some examples of the inferred distribution? The topics look odd for IMDB with the top word of two of the topics being the same: 'campbell'. It would be interesting to compare these topics with those inferred by LDA on the same datasets.
Minor comments:
Below figure 2: GHz -> GB
\Gamma is not defined. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJbbOLcex | ICLR.cc/2017/conference | 2017 | TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency | ["Adji B. Dieng", "Chong Wang", "Jianfeng Gao", "John Paisley"] | In this paper, we propose TopicRNN, a recurrent neural network (RNN)-based language model designed to directly capture the global semantic meaning relating words in a document via latent topics. Because of their sequential nature, RNNs are good at capturing the local structure of a word sequence – both semantic and syntactic – but might face difficulty remembering long-range dependencies. Intuitively, these long-range dependencies are of semantic nature. In contrast, latent topic models are able to capture the global underlying semantic structure of a document but do not account for word ordering. The proposed TopicRNN model integrates the merits of RNNs and latent topic models: it captures local (syntactic) dependencies using an RNN and global (semantic) dependencies using latent topics. Unlike previous work on contextual RNN language modeling, our model is learned end-to-end. Empirical results on word prediction show that TopicRNN outperforms existing contextual RNN baselines. In addition, TopicRNN can be used as an unsupervised feature extractor for documents. We do this for sentiment analysis on the IMDB movie review dataset and report an error rate of 6.28%. This is comparable to the state-of-the-art 5.91% resulting from a semi-supervised approach. Finally, TopicRNN also yields sensible topics, making it a useful alternative to document models such as latent Dirichlet allocation. | ["Natural language processing", "Deep learning"] | SJ1aaWHNx | review | 7: Good paper, accept | This paper presents TopicRNN, a combination of LDA and RNN that augments traditional RNN with latent topics by having a switching variable that includes/excludes additive effects from latent topics when generating a word.
Experiments on two tasks are performed: language modeling on PTB, and sentiment analysis on IMDB.
The authors show that TopicRNN outperforms a vanilla RNN on PTB and achieves a SOTA result on IMDB.
Some questions and comments:
- In Table 2, how do you use LDA features for RNN (RNN LDA features)?
- I would like to see results from an LSTM included here, even though it has lower perplexity than TopicRNN. I think it's still useful to see how much adding latent topics closes the gap between the RNN and the LSTM.
- The generated text in Table 3 is not meaningful to me. What is it supposed to highlight? Is this generated text for the topic "trading"? What about the IMDB one?
- How scalable is the proposed method for large vocabulary size (>10K)?
- What is the accuracy on IMDB if the extracted features are used directly to perform classification (instead of being passed to a neural network with one hidden state)? I think this is a fairer comparison to the BoW, LDA, and SVM methods presented as baselines. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJbbOLcex | ICLR.cc/2017/conference | 2017 | TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency | ["Adji B. Dieng", "Chong Wang", "Jianfeng Gao", "John Paisley"] | In this paper, we propose TopicRNN, a recurrent neural network (RNN)-based language model designed to directly capture the global semantic meaning relating words in a document via latent topics. Because of their sequential nature, RNNs are good at capturing the local structure of a word sequence – both semantic and syntactic – but might face difficulty remembering long-range dependencies. Intuitively, these long-range dependencies are of semantic nature. In contrast, latent topic models are able to capture the global underlying semantic structure of a document but do not account for word ordering. The proposed TopicRNN model integrates the merits of RNNs and latent topic models: it captures local (syntactic) dependencies using an RNN and global (semantic) dependencies using latent topics. Unlike previous work on contextual RNN language modeling, our model is learned end-to-end. Empirical results on word prediction show that TopicRNN outperforms existing contextual RNN baselines. In addition, TopicRNN can be used as an unsupervised feature extractor for documents. We do this for sentiment analysis on the IMDB movie review dataset and report an error rate of 6.28%. This is comparable to the state-of-the-art 5.91% resulting from a semi-supervised approach. Finally, TopicRNN also yields sensible topics, making it a useful alternative to document models such as latent Dirichlet allocation. | ["Natural language processing", "Deep learning"] | S1mRre84l | 6: Marginally above acceptance threshold | This paper introduces a model that blends ideas from generative topic models with those from recurrent neural network language models. The authors evaluate the proposed approach on a document level classification benchmark as well as a language modeling benchmark and it seems to work well. There is also some analysis as to topics learned by the model and its ability to generate text. Overall the paper is clearly written and with the code promised by the authors others should be able to re-implement the approach. I have 2 potentially major questions I would ask the authors to address:
1 - LDA topic models make an exchangeability (bag-of-words) assumption. The discussion of the generative story for TopicRNN should explicitly state whether this assumption is also made. On the surface it appears it is, since y_t is sampled using only the document topic vector and h_t, but we know that in practice h_t comes from a recurrent model that observes y_{t-1}. It is not clear how this clean exposition of the generative model relates to what is actually done. In the "Generating sequential text" section it's clear the topic model can't generate words without using y_{1:t-1}, but this seems inconsistent with the generative model specification. This needs to be shown in the paper and made clear to have a complete paper.
2 - The topic model only allows for linear interactions of the topic vector theta. It seems like this might be required to keep the generative model tractable but seems like a very poor assumption. We would expect the topic representation to have rich interactions with a language model to create nonlinear adjustments to word probabilities for a document. Please add discussion as to why this modeling choice exists and if possible how future work could modify that assumption (or explain why it’s not such a bad assumption as one might imagine)
The colors in Figure 2 are very difficult to distinguish. | 3: The reviewer is fairly confident that the evaluation is correct |
|
H1fl8S9ee | ICLR.cc/2017/conference | 2017 | Learning and Policy Search in Stochastic Dynamical Systems with Bayesian Neural Networks | ["Stefan Depeweg", "Jos\u00e9 Miguel Hern\u00e1ndez-Lobato", "Finale Doshi-Velez", "Steffen Udluft"] | We present an algorithm for policy search in stochastic dynamical systems using
model-based reinforcement learning. The system dynamics are described with
Bayesian neural networks (BNNs) that include stochastic input variables. These
input variables allow us to capture complex statistical
patterns in the transition dynamics (e.g. multi-modality and
heteroskedasticity), which are usually missed by alternative modeling approaches. After
learning the dynamics, our BNNs are then fed into an algorithm that performs
random roll-outs and uses stochastic optimization for policy learning. We train
our BNNs by minimizing $\alpha$-divergences with $\alpha = 0.5$, which usually produces better
results than other techniques such as variational Bayes. We illustrate the performance of our method by
solving a challenging problem where model-based approaches usually fail and by
obtaining promising results in real-world scenarios including the control of a
gas turbine and an industrial benchmark. | ["Deep learning", "Reinforcement Learning"] | BJBAxgMEg | Clear problem formulation, scalability questionable | 7: Good paper, accept | This paper considers the problem of model-based policy search. The authors
consider the use of Bayesian Neural Networks to learn a model of the environment
and advocate for the $\alpha$-divergence minimization rather than the more usual
variational Bayes.
The ability of alpha-divergence to capture bi-modality however
comes at a price and most of the paper is devoted to finding tractable approximations.
The authors therefore use the approach of Hernandez-Lobato
et al. (2016) as a proxy for the alpha-divergence.
The environment/system dynamics are clearly defined, as well as the policy parametrization
(section 3) and would constitute a useful reference point for other researchers.
Simulated roll-outs, using the learned model, then provide samples of the expected
return. Since a model of the environment is available, stochastic gradient descent
can be performed in the usual way, without policy gradient estimators, via automatic
differentiation tools.
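Schematically, the gradient path just described looks like the following (a PyTorch-style
sketch of my own; policy, model and cost are placeholders rather than the paper's components):

```python
import torch

def expected_return(policy, model, cost, s0, horizon, n_rollouts=10):
    """Average cost of simulated roll-outs; gradients reach the policy by ordinary
    backpropagation through the learned model, using reparameterized noise."""
    total = 0.0
    for _ in range(n_rollouts):
        s = s0
        for _ in range(horizon):
            a = policy(s)
            eps = torch.randn_like(s)                   # stochastic input, reparameterized
            s = model(torch.cat([s, a, eps], dim=-1))   # differentiable one-step prediction
            total = total + cost(s)
    return total / n_rollouts

# loss = expected_return(policy, model, cost, s0, horizon=50)
# loss.backward()   # plain backprop through the roll-out, then an SGD/Adam step on the policy
```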
The experiments demonstrate that alpha-divergence is capable of capturing multi-modal
structure which competing methods (variational Bayes and GP) would otherwise
struggle with. The proposed approach also compares favorably in a real-world
batch setting.
The paper is well-written, technically rich and combines many recent tools
into a coherent algorithm. However, the repeated use of approximations to original
quantities seems to somehow defeat the benefits of the original problem formulation.
The scalability and computational effectiveness of this approach are also questionable,
and I am uncertain whether many problems would warrant such complexity in their solution.
As with other Bayesian methods, the proposed approach would probably shine in the low-sample
regime, and in this case might be preferable to other methods in the same class (VB, GP).
| 3: The reviewer is fairly confident that the evaluation is correct |
H1fl8S9ee | ICLR.cc/2017/conference | 2017 | Learning and Policy Search in Stochastic Dynamical Systems with Bayesian Neural Networks | ["Stefan Depeweg", "Jos\u00e9 Miguel Hern\u00e1ndez-Lobato", "Finale Doshi-Velez", "Steffen Udluft"] | We present an algorithm for policy search in stochastic dynamical systems using
model-based reinforcement learning. The system dynamics are described with
Bayesian neural networks (BNNs) that include stochastic input variables. These
input variables allow us to capture complex statistical
patterns in the transition dynamics (e.g. multi-modality and
heteroskedasticity), which are usually missed by alternative modeling approaches. After
learning the dynamics, our BNNs are then fed into an algorithm that performs
random roll-outs and uses stochastic optimization for policy learning. We train
our BNNs by minimizing $\alpha$-divergences with $\alpha = 0.5$, which usually produces better
results than other techniques such as variational Bayes. We illustrate the performance of our method by
solving a challenging problem where model-based approaches usually fail and by
obtaining promising results in real-world scenarios including the control of a
gas turbine and an industrial benchmark. | ["Deep learning", "Reinforcement Learning"] | HkkCGX7Vg | Policy search using Bayesian NNs | 6: Marginally above acceptance threshold | The authors propose a novel way of using Bayesian NNs for policy search in stochastic dynamical systems. Specifically, the authors minimize alpha-divergence with alpha=0.5 as opposed to standard VB. The authors claim that their method is the first model-based system to solve a 20 year old benchmark problem; I'm not very familiar with this literature, so it's difficult for me to assess this claim.
The paper seems technically sound. I feel the writing could be improved. The notation in Sections 2-3 feels a bit dense, and a lot of terminology and approximations are introduced, which makes it hard to follow. The writing could be better structured to distinguish novel contributions from reviews of prior work. If I understand Section 2.3 correctly, it is mostly a review of black-box alpha-divergence minimization. If so, it would probably make sense to move this to the appendix.
There was a paper at NIPS 2016 showing promising results using SGHMC for Bayesian optimization: "Bayesian optimization with robust Bayesian neural networks" by Springenberg et al. Could you comment on the applicability of stochastic gradient MCMC (SGLD / SGHMC) to your setup?
Can you comment on the computational complexity of the different approaches?
Section 4.2.1: Why can't you use the original data? In what sense is it fair to simulate data using another neural network? Can you evaluate PSO-P on this problem? | 3: The reviewer is fairly confident that the evaluation is correct |
H1fl8S9ee | ICLR.cc/2017/conference | 2017 | Learning and Policy Search in Stochastic Dynamical Systems with Bayesian Neural Networks | ["Stefan Depeweg", "Jos\u00e9 Miguel Hern\u00e1ndez-Lobato", "Finale Doshi-Velez", "Steffen Udluft"] | We present an algorithm for policy search in stochastic dynamical systems using
model-based reinforcement learning. The system dynamics are described with
Bayesian neural networks (BNNs) that include stochastic input variables. These
input variables allow us to capture complex statistical
patterns in the transition dynamics (e.g. multi-modality and
heteroskedasticity), which are usually missed by alternative modeling approaches. After
learning the dynamics, our BNNs are then fed into an algorithm that performs
random roll-outs and uses stochastic optimization for policy learning. We train
our BNNs by minimizing $\alpha$-divergences with $\alpha = 0.5$, which usually produces better
results than other techniques such as variational Bayes. We illustrate the performance of our method by
solving a challenging problem where model-based approaches usually fail and by
obtaining promising results in real-world scenarios including the control of a
gas turbine and an industrial benchmark. | ["Deep learning", "Reinforcement Learning"] | HkhecuWre | Useful contribution to model-based policy search with Bayesian neural networks | 7: Good paper, accept | This paper introduces an approach for model-based control of stochastic dynamical systems with policy search, based on (1) learning the stochastic dynamics of the underlying system with a Bayesian deep neural network (BNN) that allows some of its inputs to be stochastic, and (2) a policy optimization method based on simulated rollouts from the learned dynamics. BNN training is carried out using \alpha-divergence minimization, the specific form of which was introduced in previous work by the authors. Validation and comparison of the approach is undertaken on a simulated domain, as well as real-world scenarios.
The paper is tightly written, and easy to follow. Its approach to fitting Bayesian neural networks with \alpha divergence is interesting and appears novel in this context. The resulting application to model-based control appears to have significant practical impact, particularly in light of the explainability that a system model can bring to specific decisions made by the policy. As such, I think that the paper brings a valuable contribution to the literature.
That said, I have a few questions and suggestions:
1) In section 2.2, it should be explained how the random z_n input is used by the neural network: is it just concatenated to the other inputs and used as-is, or is there a special treatment?
2) Moreover, a strong case is made for the need for stochastic inputs, but only a scalar input seems to be provided throughout. Is this enough? How computationally difficult would providing stochastic inputs of higher dimensionality be?
3) How important is the normality assumption in z_n? How is the variance \gamma established?
4) It is mentioned that the hidden layers of the neural network are made of rectifiers, but no further utilization of this fact is made in the paper. Is this assumption somehow important in the optimization of the alpha-divergence (beyond what we know about rectifiers to mitigate the vanishing gradient problem) ?
5) In Equation (3), should the \mathbf{y} in the denominator be \mathbf{Y}?
6) Section 2.3: it would be helpful to have an overview or discussion of the computational complexity of training BNNs, to understand whether and when they can practicably be used.
7) Between Eqs. (12) and (13), a citation for the statement of the time-embedding theorem would be helpful, as well as an indication of how the embedding dimension should be chosen.
8) Figure 1: the subplots should have the letters by which they are referenced in the text on p. 7.
9) In section 4.2.1, it is not clear if the gas turbine data is publicly available, and if so where. In addition more details should be provided, such as the dimensionality of the variables E_t, N_t and A_t.
10) Perhaps the comparisons with Gaussian processes should include variants that support stochastic inputs, such as Girard et al. (2003), to provide some of the same modelling capabilities as those made use of here. At least, this strand of work should be mentioned in Section 5.
References:
Girard, A., Rasmussen, C. E., Quiñonero-Candela, J., & Murray-Smith, R. (2003). Gaussian process priors with uncertain inputs: application to multiple-step-ahead time series forecasting. Advances in Neural Information Processing Systems, 545-552.
| 3: The reviewer is fairly confident that the evaluation is correct |
HyecJGP5ge | ICLR.cc/2017/conference | 2017 | NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD | ["Sahil Garg", "Irina Rish", "Guillermo Cecchi", "Aurelie Lozano"] | In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of model’s architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with improved cognitive function and adaptation to new environments. In the online learning setting, where new input instances arrive sequentially in batches, the “neuronal birth” is implemented by adding new units with random initial weights (random dictionary elements); the number of new units is determined by the current performance (representation error) of the dictionary, higher error causing an increase in the birth rate. “Neuronal death” is implemented by imposing l1/l2-regularization (group sparsity) on the dictionary within the block-coordinate descent optimization at each iteration of our online alternating minimization scheme, which iterates between the code and dictionary updates. Finally, hidden unit connectivity adaptation is facilitated by introducing sparsity in dictionary elements. Our empirical evaluation on several real-life datasets (images and language) as well as on synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art fixed-size (nonadaptive) online sparse coding of Mairal et al. (2009) in the presence of nonstationary data. Moreover, we identify certain properties of the data (e.g., sparse inputs with nearly non-overlapping supports) and of the model (e.g., dictionary sparsity) associated with such improvements. | ["Unsupervised Learning", "Computer vision", "Transfer Learning", "Optimization", "Applications"] | SkDONYuVx | Simple interesting modified online dictionary learning | 7: Good paper, accept | The authors propose a simple modification of online dictionary learning: inspired by neurogenesis, they propose to add steps of atom addition or atom deletion in order to extend the online dictionary learning algorithm of Mairal et al. Such extensions help to adapt the dictionary to changing properties of the data.
The online adaptation is very interesting, even if it is quite simple. The overall algorithm is quite reasonable, but not always described in sufficient detail: for example, the thresholds or conditions for neuronal birth or death are not supported by a strong analysis, even if the resulting algorithm seems to perform well in quite extensive experiments.
The overall idea is nevertheless interesting (even if not completely new), and the paper is generally well written and pretty easy to follow. The analysis is however quite minimal: it could have been interesting to study the evolving properties of the dictionary, to analyse its accuracy in following the changes in the data, etc.
Still, this is nice work! | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
HyecJGP5ge | ICLR.cc/2017/conference | 2017 | NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD | ["Sahil Garg", "Irina Rish", "Guillermo Cecchi", "Aurelie Lozano"] | In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of model’s architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with improved cognitive function and adaptation to new environments. In the online learning setting, where new input instances arrive sequentially in batches, the “neuronal birth” is implemented by adding new units with random initial weights (random dictionary elements); the number of new units is determined by the current performance (representation error) of the dictionary, higher error causing an increase in the birth rate. “Neuronal death” is implemented by imposing l1/l2-regularization (group sparsity) on the dictionary within the block-coordinate descent optimization at each iteration of our online alternating minimization scheme, which iterates between the code and dictionary updates. Finally, hidden unit connectivity adaptation is facilitated by introducing sparsity in dictionary elements. Our empirical evaluation on several real-life datasets (images and language) as well as on synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art fixed-size (nonadaptive) online sparse coding of Mairal et al. (2009) in the presence of nonstationary data. Moreover, we identify certain properties of the data (e.g., sparse inputs with nearly non-overlapping supports) and of the model (e.g., dictionary sparsity) associated with such improvements. | ["Unsupervised Learning", "Computer vision", "Transfer Learning", "Optimization", "Applications"] | H1BU-VuEg | Interesting idea related to biology, good experimental validation, but more work is probably needed | 5: Marginally below acceptance threshold | The paper is interesting: it relates findings from neuroscience and biology to a method for sparse coding that is adaptive and able to automatically generate (or even delete) codes as new data arrives from a nonstationary distribution.
I have a few points to make:
1. The algorithm could be discussed more, to give a more solid view of the contribution. The technique is not novel in spirit: codes are added when they are needed and removed when they don't do much.
2. Is there a way to relate the organization of the data to the behavior of this method? In this paper, buildings are shown first, and natural images (which are less structured, more difficult) later. Is this just a way to perform curriculum learning? What happens when the data simply changes in structure, with no apparent movement from simple to more complex (e.g., from flowers, to birds, to fish, to leaves, to trees, etc.)?
In a way, it makes sense to see an improvement when the training data has such a structure, going from something artificial and simpler to a more complex, less structured domain.
The paper is interesting, the idea useful with some interesting insights. I am not sure it is ready for publication yet.
| 3: The reviewer is fairly confident that the evaluation is correct |
HyecJGP5ge | ICLR.cc/2017/conference | 2017 | NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD | ["Sahil Garg", "Irina Rish", "Guillermo Cecchi", "Aurelie Lozano"] | In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of model’s architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with improved cognitive function and adaptation to new environments. In the online learning setting, where new input instances arrive sequentially in batches, the “neuronal birth” is implemented by adding new units with random initial weights (random dictionary elements); the number of new units is determined by the current performance (representation error) of the dictionary, higher error causing an increase in the birth rate. “Neuronal death” is implemented by imposing l1/l2-regularization (group sparsity) on the dictionary within the block-coordinate descent optimization at each iteration of our online alternating minimization scheme, which iterates between the code and dictionary updates. Finally, hidden unit connectivity adaptation is facilitated by introducing sparsity in dictionary elements. Our empirical evaluation on several real-life datasets (images and language) as well as on synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art fixed-size (nonadaptive) online sparse coding of Mairal et al. (2009) in the presence of nonstationary data. Moreover, we identify certain properties of the data (e.g., sparse inputs with nearly non-overlapping supports) and of the model (e.g., dictionary sparsity) associated with such improvements. | ["Unsupervised Learning", "Computer vision", "Transfer Learning", "Optimization", "Applications"] | Syk3UmQEe | review | 5: Marginally below acceptance threshold |
I'd like to thank the authors for their detailed response and clarifications.
This work proposes a new training scheme for online sparse dictionary learning. The model assumes a non-stationary flow of incoming data. The goal (and the challenge) is to learn a model in an online manner that is capable of adjusting to new incoming data without forgetting how to represent previously seen data. The proposed approach deals with this problem by incorporating a mechanism for adding or deleting atoms in the dictionary. This procedure is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus.
The paper has two main innovations over the baseline approach (Mairal et al): (i) “neuronal birth” which represents an adaptive way of increasing the number of atoms in the dictionary (ii) "neuronal death", which corresponds to removing “useless” dictionary atoms.
Neuronal death is implemented by including a group-sparsity regularization on the dictionary atoms themselves (each group corresponds to a column of the dictionary). This encourages atoms that are not very useful to shrink to zero, keeping the growth of the dictionary size under control.
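For concreteness, a minimal sketch of my reading of that step (not the authors' code; names are illustrative): the proximal operator of the column-wise group-lasso penalty is block soft-thresholding, which can zero out entire atoms.

```python
import numpy as np

def prox_group_sparsity(D, lam):
    """Block soft-thresholding of each dictionary atom (column of D).
    Atoms whose l2 norm falls below `lam` are shrunk to exactly zero,
    i.e. the corresponding hidden unit "dies" and can be pruned."""
    norms = np.linalg.norm(D, axis=0)                        # per-atom l2 norm
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return D * scale                                         # broadcasts over columns
```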
I believe that the strong side of the paper is its connections with the adult neurogenesis phenomenon, which is, in my opinion, a very nice feature.
The paper is very well written and easy to follow.
On the other hand, the overall technique is not very novel. Although not exactly equivalent, similar ideas have been explored before. While neuronal death is implemented elegantly with a sparsity-promoting regularization term, neuronal birth relies on heuristics that measure how well the dictionary can represent new incoming data, which, depending on the "level" of non-stationarity in the incoming data (or the presence of outliers), could be difficult to tune. Still, having an adaptive dictionary size is very interesting.
The authors could also cite some references from the model selection literature. In particular, ideas such as MDL have been used for automatically selecting the dictionary size (I believe this work does not address the online setting, but it is still a relevant reference to have). For instance:
Ramirez, Ignacio, and Guillermo Sapiro. "An MDL framework for sparse coding and dictionary learning." IEEE Transactions on Signal Processing 60.6 (2012): 2913-2927.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
SyxeqhP9ll | ICLR.cc/2017/conference | 2017 | Calibrating Energy-based Generative Adversarial Networks | ["Zihang Dai", "Amjad Almahairi", "Philip Bachman", "Eduard Hovy", "Aaron Courville"] | In this paper, we propose to equip Generative Adversarial Networks with the ability to produce direct energy estimates for samples.
Specifically, we propose a flexible adversarial training framework, and prove this framework not only ensures the generator converges to the true data distribution, but also enables the discriminator to retain the density information at the global optimal.
We derive the analytic form of the induced solution, and analyze the properties.
In order to make the proposed framework trainable in practice, we introduce two effective approximation techniques.
Empirically, the experiment results closely match our theoretical analysis, verifying the discriminator is able to recover the energy of data distribution. | ["Deep learning"] | ByqjAeGEg | Interesting well written paper on improving the stability of discriminators in GANs. | 7: Good paper, accept | The authors present a method for changing the objective of generative adversarial networks such that the discriminator accurately recovers density information about the underlying data distribution. In the course of deriving the changed objective they prove that stability of the discriminator is not guaranteed in the standard GAN setup but can be recovered via an additional entropy regularization term.
The paper is clearly written, including the theoretical derivation. The derivation of the additional regularization term seems valid and is well explained. The experiments also empirically seem to support the claim that the proposed changed objective results in a "better" discriminator. There are only a few issues with the paper in its current form:
- The presentation, albeit fairly clear in the details following the initial exposition in 3.1 and the beginning of 3.2, fails to accurately convey the difference between the energy-based view of training GANs and the standard GAN. As a result it took me several passes through the paper to understand why the results don't hold for a standard GAN. I think it would be clearer if you stated the connections up-front in 3.1 (perhaps without the additional f-GAN perspective) and perhaps added some explanation of how c() is implemented, either right there or in the experiments (you may want to just add these details in the Appendix; see also the comment below).
- The proposed procedure will by construction only result in an improved generator and, unless I misunderstand something, does not result in improved stability of GAN training. You also don't make such a claim, but an uninformed reader might get this wrong impression, especially since you mention improved performance compared to Salimans et al. in the Inception-score experiment. It might be worthwhile mentioning this early in the paper.
- The experiments, although well designed, mainly convey qualitative results, with the exception of the table in the appendix for the toy datasets. I know that evaluating GANs is in itself not an easy task, but I wonder whether additional, more quantitative experiments could be performed to evaluate the discriminator's performance. For example, one could evaluate how well the final discriminator separates real from fake examples, and how robust its classification is to injected noise (e.g. how classification accuracy changes for noised training data). Further, one might wonder whether the last-layer features learned by a discriminator using the changed objective are better suited for use in auxiliary tasks (e.g. classifying objects into categories).
- Main complaint: it is completely unclear what the generator and discriminator architectures look like for the experiments. You mention that code will be available soon, but I feel that a short description, at least of the form of the energy used, should also appear in the paper somewhere (perhaps in the appendix).
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
SyxeqhP9ll | ICLR.cc/2017/conference | 2017 | Calibrating Energy-based Generative Adversarial Networks | ["Zihang Dai", "Amjad Almahairi", "Philip Bachman", "Eduard Hovy", "Aaron Courville"] | In this paper, we propose to equip Generative Adversarial Networks with the ability to produce direct energy estimates for samples.
Specifically, we propose a flexible adversarial training framework, and prove this framework not only ensures the generator converges to the true data distribution, but also enables the discriminator to retain the density information at the global optimal.
We derive the analytic form of the induced solution, and analyze the properties.
In order to make the proposed framework trainable in practice, we introduce two effective approximation techniques.
Empirically, the experiment results closely match our theoretical analysis, verifying the discriminator is able to recover the energy of data distribution. | ["Deep learning"] | HJXpDiAQg | 8: Top 50% of accepted papers, clear accept | The submission explores several alternatives to provide the generator function in generative adversarial training with additional gradient information. The exposition starts by describing a general formulation about how this additional gradient information (termed K(p_gen) could be added to the generative adversarial training objective function (Equation 1). Next, the authors prove that the shape of the optimal discriminator does indeed depend on the added gradient information (Proposition 3.1), which is unsurprising. Finally, the authors propose three particular alternatives to construct K(p_gen): the negative entropy of the generator distribution, the L2 norm of the generator distribution, and a constant function (which resembles the EBGAN objective of Zhao et al, 2016).
The exposition then moves to an experimental evaluation of the method, which sets K(p_gen) to be the approximate entropy of the generator distribution. At this point, my intuition is that the objective function under study is the vanilla GAN objective plus a regularization term that encourages diversity (high entropy) in the generator distribution. The hope of the authors is that this regularization will transform the discriminator into an estimate of the energy landscape of the data distribution.
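In symbols, my hedged reading of the template (exact signs and normalizations may differ from the paper's Equation 1) is a generator objective of the form $\min_{p_{gen}} \; -\mathbb{E}_{x \sim p_{gen}}[D(x)] + K(p_{gen})$ inside the usual adversarial game, with the three proposed instances being $K(p_{gen}) = -\mathcal{H}(p_{gen})$ (negative entropy, which rewards diversity), $K(p_{gen}) = \tfrac{1}{2}\|p_{gen}\|_2^2$, and $K(p_{gen}) = \text{const}$ (the EBGAN-like case).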
The experimental evaluation proceeds by 1) showing the contour plots of the obtained generator distribution for a 2D problem, 2) studying the generation diversity in MNIST digits, and 3) showing some samples for CIFAR-10 and CelebA. The 2D problem results are convincing, since one can clearly observe that the discriminator scores translate into unnormalized values of the density function. The MNIST results offer good intuition also: the more prototypical digits are assigned larger scores (unnormalized densities) by the discriminator, and the less prototypical digits are assigned smaller scores. The sample experiments from Section 5.3 are less convincing, since no samples from baseline models are provided for comparison.
To this end, I would recommend that the authors clarify three aspects. First, we have seen that entropy regularization leads to a discriminator that estimates the energy landscape of the data distribution. But how does this regularization reshape the generator function? It would be nice to see the mean MNIST digit according to the generator, and some other statistics if possible. Second, how do the samples produced by the proposed methods compare (visually speaking) to the state of the art? Third, what are the *shortcomings* of this method versus the vanilla GAN? Too much computational overhead? What are the qualitative and quantitative differences between the two entropy estimators proposed in the manuscript?
Overall, a clearly written paper. I vote for acceptance.
As an open question to the authors: What breakthroughs should we pursue to derive a GAN objective where the discriminator is an estimate of the data density function, after training?
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
|
SyxeqhP9ll | ICLR.cc/2017/conference | 2017 | Calibrating Energy-based Generative Adversarial Networks | ["Zihang Dai", "Amjad Almahairi", "Philip Bachman", "Eduard Hovy", "Aaron Courville"] | In this paper, we propose to equip Generative Adversarial Networks with the ability to produce direct energy estimates for samples.
Specifically, we propose a flexible adversarial training framework, and prove this framework not only ensures the generator converges to the true data distribution, but also enables the discriminator to retain the density information at the global optimal.
We derive the analytic form of the induced solution, and analyze the properties.
In order to make the proposed framework trainable in practice, we introduce two effective approximation techniques.
Empirically, the experiment results closely match our theoretical analysis, verifying the discriminator is able to recover the energy of data distribution. | ["Deep learning"] | rklqb1P4x | A mathematically elegant extension of GANs to approximate density estimation | 8: Top 50% of accepted papers, clear accept | This paper addresses one of the major shortcomings of generative adversarial networks - their lack of mechanism for evaluating held-out data. While other work such as BiGANs/ALI address this by learning a separate inference network, here the authors propose to change the GAN objective function such that the optimal discriminator is also an energy function, rather than becoming uninformative at the optimal solution. Training this new objective requires gradients of the entropy of the generated data, which are difficult to approximate, and the authors propose two methods to do so, one based on nearest neighbors and one based on a variational lower bound. The results presented show that on toy data the learned discriminator/energy function closely approximates the log probability of the data, and on more complex data the discriminator give a good measure of quality for held out data.
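To make the nearest-neighbors option concrete, here is a hedged sketch of a standard k-NN entropy estimate (Kozachenko-Leonenko); the paper's exact estimator may well differ:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x, k=3):
    """Kozachenko-Leonenko estimate of H(p) in nats from samples x of shape (n, d)."""
    n, d = x.shape
    eps = cKDTree(x).query(x, k=k + 1)[0][:, -1]     # distance to the k-th neighbour
    log_unit_ball = 0.5 * d * np.log(np.pi) - gammaln(0.5 * d + 1.0)
    return (digamma(n) - digamma(k) + log_unit_ball
            + d * np.mean(np.log(np.maximum(eps, 1e-12))))
```

Differentiating such an estimate with respect to the generated samples is exactly where the scalability concern below comes in.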
I would say the largest shortcomings of the paper are the practical issues around the scalability of the nearest-neighbors approximation and the accuracy of the variational approximation, which the authors acknowledge. Also, since entropy estimation and density estimation are such closely linked problems, I wonder if any practical method for EGANs will end up being equivalent to some form of approximate density estimation, exactly the problem GANs were designed to circumvent. Nonetheless, the elegant mathematical exposition alone makes the paper a worthwhile contribution to the literature.
Also, some quibbles about the writing: it seems that something is missing in the sentence at the top of pg. 5, "Finally, let's whose discriminative power". I'm not sure what the authors mean to say here. And the title undersells the paper: it makes it sound like they are making a small improvement to training an existing model rather than deriving an alternative training framework. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJ6DhP5xe | ICLR.cc/2017/conference | 2017 | Generalizable Features From Unsupervised Learning | ["Mehdi Mirza", "Aaron Courville", "Yoshua Bengio"] | Humans learn a predictive model of the world and use this model to reason about future events and the consequences of actions. In contrast to most machine predictors, we exhibit an impressive ability to generalize to unseen scenarios and reason intelligently in these settings. One important aspect of this ability is physical intuition(Lake et al., 2016). In this work, we explore the potential of unsupervised learning to find features that promote better generalization to settings outside the supervised training distribution. Our task is predicting the stability of towers of square blocks. We demonstrate that an unsupervised model, trained to predict future frames of a video sequence of stable and unstable block configurations, can yield features that support extrapolating stability prediction to blocks configurations outside the training set distribution | ["Unsupervised Learning", "Deep learning"] | ryyOs5xVg | This work seems rather preliminary in terms of experimentation and using forward modeling as pretraining has already been proposed and applied to video and text classification tasks. Discussion on related work is insufficient. The end task choice (will there be motion?) might not be the best to advocate for unsupervised training. | 3: Clear rejection | *** Paper Summary ***
The paper proposes to learn a predictive model (i.e., predict the next video frames given an input image) and uses the predictions from this model to improve a supervised classifier. The effectiveness of the approach is illustrated on a tower-stability dataset.
*** Review Summary ***
This work seems rather preliminary in terms of experimentation, and using forward modeling as pretraining has already been proposed and applied to video and text classification tasks. The discussion of related work is insufficient. The end-task choice (will there be motion?) might not be the best to advocate for unsupervised training.
*** Detailed Review ***
This work seems rather preliminary. There is no comparison with alternative semi-supervised strategies. Any approach that considers the next frames as latent variables (or privileged information) could be considered. Also, I am not sure the supervised stability prediction model is actually needed once the next frame is predicted. Basically, the task can be reduced to predicting whether there will be motion in the video following the current frame or not (for instance, comparing the first frame and the last prediction, or the density of gray in the top part of the video, might work just as well; see the sketch below). Also, training a model to predict the presence of motion from the unsupervised data alone would probably do very well. I would suggest steering away from tasks where the label can be inferred trivially from the unsupervised data, meaning that unlabeled videos can be considered labeled frames in that case.
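Sketch of the trivial baseline alluded to above (the threshold and names are illustrative assumptions, not from the paper):

```python
import numpy as np

def will_fall(first_frame, last_frame, threshold=0.05):
    """Declare a tower unstable if the mean per-pixel change between the
    first frame and the final (predicted or real) frame is large enough."""
    diff = np.abs(first_frame.astype(float) - last_frame.astype(float)) / 255.0
    return diff.mean() > threshold
```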
The related work section is missing a discussion of previous work on learning unsupervised features from video (through predictive models, dimensionality reduction, ...) for helping classification of still images or videos [Fathi et al. 2008; Mobahi et al. 2009; Srivastava et al. 2015]. More recently, Wang and Gupta (2015) have obtained excellent ImageNet results from features pre-trained on unlabeled videos. Vondrick et al. (2016) have shown that generative models of video can help initialize models for video classification tasks. Also, in the field of text classification, pre-training a classifier with a language model is a form of predictive modeling, e.g. Dai & Le (2015).
I would also suggest reporting test results on the dataset from Lerer et al. (2016) (I understand that you need your own videos to pre-train the predictive model, but stability prediction only requires still images).
Overall, I feel the experimental section is too preliminary. It would be better to focus on a task where solving the unsupervised task does not necessarily imply that the supervised task is trivially solved (or, conversely, where a simple rule can turn the unlabeled data into labeled data).
*** Reference ***
Fathi, Alireza, and Greg Mori. "Action recognition by learning mid-level motion features." Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on. IEEE, 2008.
Mobahi, Hossein, Ronan Collobert, and Jason Weston. "Deep learning from temporal coherence in video." Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009.
Srivastava, Nitish, Elman Mansimov, and Ruslan Salakhutdinov. "Unsupervised learning of video representations using lstms." CoRR, abs/1502.04681 2 (2015).
Dai, A., and Le, Q. V. "Semi-supervised sequence learning." Advances in Neural Information Processing Systems, 2015.
Wang, X., and Gupta, A. "Unsupervised learning of visual representations using videos." ICCV, 2015.
Vondrick, C., Pirsiavash, H., and Torralba, A. "Generating videos with scene dynamics." NIPS, 2016.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJ6DhP5xe | ICLR.cc/2017/conference | 2017 | Generalizable Features From Unsupervised Learning | ["Mehdi Mirza", "Aaron Courville", "Yoshua Bengio"] | Humans learn a predictive model of the world and use this model to reason about future events and the consequences of actions. In contrast to most machine predictors, we exhibit an impressive ability to generalize to unseen scenarios and reason intelligently in these settings. One important aspect of this ability is physical intuition(Lake et al., 2016). In this work, we explore the potential of unsupervised learning to find features that promote better generalization to settings outside the supervised training distribution. Our task is predicting the stability of towers of square blocks. We demonstrate that an unsupervised model, trained to predict future frames of a video sequence of stable and unstable block configurations, can yield features that support extrapolating stability prediction to blocks configurations outside the training set distribution | ["Unsupervised Learning", "Deep learning"] | rJLj58VNx | Good work, though more detailed analysis would be helpful | 5: Marginally below acceptance threshold | Summary
===
This paper trains models to predict whether block towers will fall down
or not. It shows that an additional model of how blocks fall down
(predicting a sequence of frames via unsupervised learning) helps the original
supervised task to generalize better.
This work constructs a synthetic dataset of block towers containing
3 to 5 blocks placed in more or less precarious positions. It includes both
labels (the tower falls or not) and video frame sequences of the tower's
evolution according to a physics engine.
Three kinds of models are trained. The first (S) simply takes an image of a
tower's starting state and predicts whether it will fall or not. The
other two types (CD and CLD) take both the start state and the final state of the
tower (after it has or has not fallen) and predict whether it has fallen or not;
they differ only in how the final state is provided. One model (ConvDeconv, CD)
predicts the final frame using only the start frame and the other
(ConvLSTMDeconv) predicts a series of intermediate frames before coming
to the final frame. Both CD and CLD are unsupervised.
Each model is trained on towers of a particular height and tested on
towers with an unseen height. When the height of the train towers
is the same as the test tower height, all models perform roughly the same
(within a few percentage points). However, when the test height is
greater than the train height it is extremely helpful to explicitly
model the final state of the block tower before deciding whether it has
fallen or not (via CD and CLD models).
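(A hedged sketch of my reading of the two-stage CD/CLD pipeline; module
names are illustrative assumptions, not the authors' code:)

```python
import torch

def predict_fall(frame0, frame_predictor, fall_classifier):
    """Stage 1: the unsupervised model predicts the tower's final frame
    (CD in one shot; CLD via an LSTM roll-out over intermediate frames).
    Stage 2: a supervised classifier sees start + predicted final frame."""
    with torch.no_grad():
        final_frame = frame_predictor(frame0)
    both = torch.cat([frame0, final_frame], dim=1)  # stack along channels
    return fall_classifier(both)                    # P(tower falls)
```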
Pros
===
* There are very clear (large) gains in accuracy from adding an unsupervised
final frame predictor. Because the generalization problem is also particularly
clear (train and test with different numbers of blocks), this makes for
a very nice toy example where unsupervised learning provides a clear benefit.
* The writing is clear.
Cons
===
My one major concern is a lack of more detailed analysis. The paper
establishes a base result, but does not explore the idea to the extent
I think an ICLR paper should. Two general directions for potential
analysis follow:
* Is this a limitation of the particular way the block towers are rendered?
The LSTM model could be limited by the sub-sampling strategy. From the
provided examples, it looks like the sampling may be too coarse. For the
two towers in Figure 2 that fall, they have fallen after only 1 or 2
time steps. How quickly do most towers fall? What happens if the LSTM
is trained at a higher frame rate? What is the frame-by-frame video
prediction accuracy of the LSTM? (Is that quantity meaningful?)
How much does performance improve if the LSTM is provided ground truth
for only the first k frames?
* Why is generalization to different block heights limited?
Is it limited by model capacity or architecture design?
What would happen if the S-type models were made wider/deeper with the CD/CLD
fall predictor capacity fixed?
Is it limited by the precise task specification?
What would happen if networks were trained with towers of multiple heights
(apparently this experiment is in the works)?
I appreciate that one experiment in this direction was provided.
Is it limited by training procedure? What if the CD/CLD models were trained
in an end-to-end manner? What if the double frame fall predictor were trained
with ground truth final frames instead of generated final frames?
Minor concerns:
* It may be asking too much to re-implement Zhang et al. (2016) and PhysNet
for the newly proposed dataset, but it would help the paper to have baselines
which are directly comparable to the proposed results. I do not think this
is a major concern because the point of the paper is about the role of
unsupervised learning rather than creating the best fall prediction network.
* The auxiliary experiment provided is motivated as follows:
"One solution could be to train these models to predict how many blocks have
fallen instead of a binary stability label."
Is there a clear intuition for why this might make the task easier?
* Will the dataset, or code to generate it, be released?
Overall Evaluation
===
The writing, presentation, and experiments are clear and of high enough
quality for ICLR. However, the experiments provide limited analysis past
the main result (see comments above). The idea is a clear extension of ideas behind unsupervised
learning (video prediction) and recent results in intuitive physics from
Lerer et al. (2016) and Zhang et al. (2016), so there is only moderate novelty.
However, these results would provide a valuable addition to the literature,
especially if more analysis were provided.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
rJ6DhP5xe | ICLR.cc/2017/conference | 2017 | Generalizable Features From Unsupervised Learning | ["Mehdi Mirza", "Aaron Courville", "Yoshua Bengio"] | Humans learn a predictive model of the world and use this model to reason about future events and the consequences of actions. In contrast to most machine predictors, we exhibit an impressive ability to generalize to unseen scenarios and reason intelligently in these settings. One important aspect of this ability is physical intuition(Lake et al., 2016). In this work, we explore the potential of unsupervised learning to find features that promote better generalization to settings outside the supervised training distribution. Our task is predicting the stability of towers of square blocks. We demonstrate that an unsupervised model, trained to predict future frames of a video sequence of stable and unstable block configurations, can yield features that support extrapolating stability prediction to blocks configurations outside the training set distribution | ["Unsupervised Learning", "Deep learning"] | SkG-C_NNl | Good preliminary work, more controls and detailed analysis are needed. | 5: Marginally below acceptance threshold | Paper Summary
This paper evaluates the ability of two unsupervised learning models to learn a
generalizable physical intuition governing the stability of a tower of blocks.
The two models are (1) A model that predicts the final state of the tower given
the initial state, and (2) A model that predicts the sequence of states of this
tower over time given the initial state. Generalizability is evaluated by
training a model on towers made of a certain number of blocks but testing on
towers made of a different number of blocks.
Strengths
- This paper explores an interesting way to evaluate representations in terms of
their generalizability to out-of-domain data, as opposed to more standard
methods which use train and test data drawn from the same distribution.
- Experiments show that the predictions of deep unsupervised learning models on
such out-of-domain data do seem to help, even though the models were not
trained explicitly to help in this way.
Weaknesses
- Based on Fig 4, it seems that the models trained on 3 blocks (3CD, 3CLD)
``generalize" to 4 and 5 blocks. However, it is plausible that these models
only pay attention to the bottom 3 blocks of the 4 or 5 block towers in order to
determine their stability. This would work correctly a significant fraction of
the time. Therefore, the models might actually be overfitting to 3 block towers
and not really generalizing the physics of these blocks. Is this a possibility?
I think more careful controls are needed to make the claim that the features
actually generalize. For example, test the 3 block model on a 5 block test set
but only make the 4th or 5th block unstable. If the model still works well, then
we could argue that it is actually generalizing.
- The experimental analysis seems somewhat preliminary and can be improved. In
particular, it would help to see visualizations of what the final state looks
like for models trained on 3 blocks but tested on 5 (and vice-versa). That would
help understand if the generalization is really working. The discriminative
objective gives some indication of this, but might obfuscate some aspects of
physical realism that we would really want to test. In Figures 1 and 2, it is
not mentioned whether these models are being tested on the same number of blocks
they were trained for.
- It seems that the task of predicting the final state is really a binary
task - whether or not to remove the blocks and replace them with gray
background. The places where the blocks land in case of a fall are probably quite
hard to predict, even for a human, because small perturbations can have a big
impact on the final state. It seems that in order to get a generalizable
physics model, it could help to have a high frame rate sequence prediction task.
Currently, the video is subsampled to only 5 time steps.
Quality
A more detailed analysis and careful choices of testing conditions can increase
the quality of this paper and strengthen the conclusions that can be drawn from
this work.
Clarity
The paper is well written and easy to follow.
Originality
The particular setting explored in this paper is novel.
Significance
This paper provides a valuable addition to the growing work on
transferability/generalizability as an evaluation method for unsupervised
learning. However, more detailed experiments and analysis are needed to make
this paper significant enough for an ICLR paper.
Minor comments and suggestions
- The acronym IPE is used without mentioning its expansion anywhere in the text.
- There seems to be a strong dependence on data augmentation. But given that
this is a synthetic dataset, it is not clear why more data was not generated
in the first place.
- Table 3: It might be better to draw this as a 9 x 3 grid: 9 rows corresponding to the
models and 3 columns corresponding to the test sets. Mentioning the train set is
redundant since it is already captured in the model name. That might make it
easier to read.
Overall
This is an excellent direction to work in, and the preliminary results look great.
However, more controls and detailed analysis are needed to make strong
conclusions from these experiments. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
ByZvfijeg | ICLR.cc/2017/conference | 2017 | Higher Order Recurrent Neural Networks | ["Rohollah Soltani", "Hui Jiang"] | In this paper, we study novel neural network structures to better model long term dependency in sequential data.
We propose to use more memory units to keep track of more preceding states in recurrent neural networks (RNNs), which are all recurrently fed to the hidden layers as feedback through different weighted paths. By extending the popular
recurrent structure in RNNs, we provide the models with better short-term memory mechanism to learn long term dependency in sequences. Analogous to digital filters in signal processing, we call these structures as higher order RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned using the back-propagation through time method. HORNNs are generally applicable to a variety of sequence modelling tasks. In this work, we have examined HORNNs for the language modeling task using two popular data sets, namely the Penn Treebank (PTB) and English text8. Experimental results have shown that the proposed HORNNs yield the state-of-the-art performance on both data sets, significantly outperforming the regular RNNs as well as the popular LSTMs. | ["Deep learning", "Natural language processing"] | ryxy7d84e | Interesting idea, but not ready yet | 4: Ok but not good enough - rejection | The authors of the paper explore the idea of incorporating skip connections *over time* for RNNs. Even though the basic idea is not particularly innovative, a few proposals on how to merge that information into the current hidden state with different pooling functions are evaluated. The different models are compared on two popular text benchmarks.
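As a hedged sketch (my notation; the paper's may differ), the recurrence being evaluated is of the form $h_t = \phi\big(W x_t + \sum_{k=1}^{K} U_k \, g_k(h_{t-k})\big)$, where the $g_k$ are the candidate pooling/weighting functions applied to the skipped-over states.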
Some points.
1) The experiments feature only NLP and only prediction tasks. It would have been nice to see the models in other domains, i.e. modelling a conditional distribution p(y|x), not only p(x). Further, sensory input data such as audio or video would have given further insight.
2) As pointed out by other reviewers, it does not feel as if the comparisons to other models are fair. SOTA on NLP changes quickly and it is hard to place the experiments in the complete picture.
3) It is claimed that this helps long-term prediction. I think the paper lacks a corresponding analysis, as pointed out in an earlier question of mine.
4) It is claimed that LSTM trains slowly and is hard to scale. For one, this does not match my personal experience. Moreover, the prevalence of LSTM systems in production (e.g. Google, Baidu, Microsoft, …) clearly speaks against this.
I like the basic idea of the paper, but the points above make me think it is not ready for publication.
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
ByZvfijeg | ICLR.cc/2017/conference | 2017 | Higher Order Recurrent Neural Networks | ["Rohollah Soltani", "Hui Jiang"] | In this paper, we study novel neural network structures to better model long term dependency in sequential data.
We propose to use more memory units to keep track of more preceding states in recurrent neural networks (RNNs), which are all recurrently fed to the hidden layers as feedback through different weighted paths. By extending the popular
recurrent structure in RNNs, we provide the models with better short-term memory mechanism to learn long term dependency in sequences. Analogous to digital filters in signal processing, we call these structures as higher order RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned using the back-propagation through time method. HORNNs are generally applicable to a variety of sequence modelling tasks. In this work, we have examined HORNNs for the language modeling task using two popular data sets, namely the Penn Treebank (PTB) and English text8. Experimental results have shown that the proposed HORNNs yield the state-of-the-art performance on both data sets, significantly outperforming the regular RNNs as well as the popular LSTMs. | ["Deep learning", "Natural language processing"] | ByFU_yNEx | can be improved | 6: Marginally above acceptance threshold | I think the backbone of the paper is interesting and could lead to something potentially quite useful. I like the idea of connecting signal processing with recurrent network and then using tools from one setting in the other. However, while the work has nuggets of very interesting observations, I feel they can be put together in a better way.
I think the write-up, and the paper overall, can be improved, and I urge the authors to strive for this if the paper doesn't go through. I think some of the ideas on how to connect to the past are interesting; it would be nice to have more experiments, or to try to understand better why these connections help and how. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
ByZvfijeg | ICLR.cc/2017/conference | 2017 | Higher Order Recurrent Neural Networks | ["Rohollah Soltani", "Hui Jiang"] | In this paper, we study novel neural network structures to better model long term dependency in sequential data.
We propose to use more memory units to keep track of more preceding states in recurrent neural networks (RNNs), which are all recurrently fed to the hidden layers as feedback through different weighted paths. By extending the popular
recurrent structure in RNNs, we provide the models with better short-term memory mechanism to learn long term dependency in sequences. Analogous to digital filters in signal processing, we call these structures as higher order RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned using the back-propagation through time method. HORNNs are generally applicable to a variety of sequence modelling tasks. In this work, we have examined HORNNs for the language modeling task using two popular data sets, namely the Penn Treebank (PTB) and English text8. Experimental results have shown that the proposed HORNNs yield the state-of-the-art performance on both data sets, significantly outperforming the regular RNNs as well as the popular LSTMs. | ["Deep learning", "Natural language processing"] | B1uDD-zVx | Incremental work | 3: Clear rejection | This paper proposes the idea of looking n steps backward when modelling sequences with RNNs. The proposed RNN not only uses the previous hidden state (t-1) but also looks further back ((t-k) steps, where k = 1, 2, 3, 4). The paper also proposes a few different ways to aggregate multiple hidden states from the past.
The reviewer can see a few issues with this paper.
Firstly, the writing of this paper requires improvement. The introduction and abstract waste too much space explaining unrelated facts or describing things that are already well known in the literature. Some of the statements in the paper are misleading. For instance, it explains, “Among various neural network models, recurrent neural networks (RNNs) are appealing for modeling sequential data because they can capture long term dependency in sequential data using a simple mechanism of recurrent feedback”, and then says that RNNs cannot actually capture long-term dependencies that well. RNNs are appealing in the first place because they can handle variable-length sequences and can model temporal relationships between the symbols in a sequence. The criticism of LSTMs is hard to accept when the paper says that LSTMs are slow and, because of this slowness, hard to scale to larger tasks. We all know that some companies are already using gigantic seq2seq models in production (with LSTMs as building blocks in their systems), which indicates that LSTMs can be practically used in very large-scale settings.
Secondly, the idea proposed in the paper is incremental and not new to the field. There are previous works that propose using direct connections to earlier hidden states [1], although those works do not aggregate multiple previous hidden states. Most importantly, the paper fails to deliver a proper analysis of whether its main contribution actually helps with the problem posed in the paper. The new architecture is said to handle long-term dependencies better; however, there is no rigorous proof or intuitive element of the architecture's design that helps us understand why it should work better. Judging from the design of the architecture, and speaking at a very high level, it seems the model may help mitigate the vanishing gradient issue by a linear factor. It is always good practice to devote at least one page to analyzing the empirical findings of the paper.
Thirdly, the baseline models used in this paper are very weak. There are plenty of other models that are trained and tested on the word-level language modelling task using the Penn Treebank corpus, but the paper only includes a few outdated models. I cannot fully agree with the statement “To the best of our knowledge, this is the best performance on PTB under the same training condition”: these days, RNN-based methods usually score below 80 in test perplexity, which is far lower than the roughly 100 achieved in this paper.
[1] Zhang et al., “Architectural Complexity Measures of Recurrent Neural Networks”, NIPS’16
| 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
BJ9fZNqle | ICLR.cc/2017/conference | 2017 | Multi-modal Variational Encoder-Decoders | ["Iulian V. Serban", "Alexander G. Ororbia II", "Joelle Pineau", "Aaron Courville"] | Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, uni-modal priors — such as the multivariate Gaussian distribution — yet many real-world data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling. | ["Deep learning", "Structured prediction", "Natural language processing"] | HJeBaC-Vl | Review: Multi-modal Variational Encoder-Decoders | 3: Clear rejection | UPDATE: I have read the authors' rebuttal and also the other comments in this paper's thread. My thoughts have not changed.
The authors propose using a mixture prior rather than a uni-modal prior for variational auto-encoders. They argue that the simple uni-modal prior "hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution."
I find the motivation of the paper questionable because while the prior may be uni-modal, the posterior distribution is certainly not. Furthermore, a uni-modal distribution on the latent variable space can certainly still lead to the capturing of complex, multi-modal data distributions. (As the most trivial case, take the latent variable to be uniformly distributed and the likelihood to be a point mass given by applying the true data distribution's inverse CDF to the uniform variable; such a model can capture any distribution. A worked version of this construction appears after this review.)
In addition, multi-modality is arguably an over-emphasized concept in the literature: the latent variable space is often a complex nonlinear space, and it is hardly better captured by a mixture of simple distributions. It is unclear from the experiments how much the prior's multimodality influences the posterior to capture more complex phenomena, and whether this is any better than considering a more complex (but still reparameterizable) distribution on the latent space.
I recommend that this paper be rejected, and encourage the authors to more extensively study the effect of different priors.
I'd also like to make two additional comments:
While there is no length restriction at ICLR, the 14-page document could be significantly condensed without losing clarity or the description of the innovation. I recommend the authors do so.
Finally, I think it's important to note the controversy around this paper. It was submitted with many significant details incomplete (e.g., no experiments, many missing citations, a figure that was pencilled in by hand, and several missing paragraphs). These details were not completed until roughly a week(?) later. I recommend the chairs discuss this in light of what should be allowed next year. | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
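The inverse-CDF argument in the review above can be made precise. The following is a minimal worked version of that standard probability fact, not material from the reviewed paper:

```latex
\paragraph{Inverse-CDF construction.}
Let $F$ be the CDF of any target distribution on $\mathbb{R}$ (assumed
continuous and strictly increasing, so $F^{-1}$ exists), take the latent
prior $z \sim \mathrm{Uniform}(0,1)$, and set $x = F^{-1}(z)$. Then
\[
  \Pr[x \le t] \;=\; \Pr\!\left[F^{-1}(z) \le t\right]
  \;=\; \Pr[z \le F(t)] \;=\; F(t),
\]
so $x$ has exactly the target distribution: a uni-modal (indeed uniform)
prior pushed through a deterministic decoder reproduces an arbitrarily
multi-modal marginal.
```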
BJ9fZNqle | ICLR.cc/2017/conference | 2017 | Multi-modal Variational Encoder-Decoders | ["Iulian V. Serban", "Alexander G. Ororbia II", "Joelle Pineau", "Aaron Courville"] | Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, uni-modal priors — such as the multivariate Gaussian distribution — yet many real-world data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling. | ["Deep learning", "Structured prediction", "Natural language processing"] | BJw-pyfVe | Official Review | 4: Ok but not good enough - rejection | This paper proposes a piecewise constant parameterisation for neural variational models so that it could explore the multi-modality of the latent variables and develop more powerful neural models.
The experiments with neural variational document models and variational hierarchical recurrent encoder-decoder models show that the introduction of the piecewise constant distribution helps achieve better perplexity on document modelling and seemingly better performance on dialogue modelling.
The idea of having a piecewise constant prior for latent variables is interesting, but the paper is not well written (even at 14 pages) and the design of the experiments fails to demonstrate most of the claims.
The detailed comments are as follows:
--The author explains the limitations of VAEs with a standard Gaussian prior in the last paragraphs of 3.1 and 5.1, arguing that a multimodal prior would help VAEs overcome these optimisation issues. However, there is a lack of evidence showing that the multimodality of the prior helps break this bottleneck.
--In the last paragraph of 6.1, the author claims the decoder parameter matrix is directly affected by the latent variables. But what connects to the decoder is a combination of piecewise constant and Gaussian latent variables. Whatever is discovered in the experiments, it only shows that z=<z_gaussian, z_piecewise> is multimodal; however, z=<z_gaussian1, z_gaussian2> can be multimodal as well. None of the claims in this paragraph stands.
--In the quantitative evaluation of NVDM, there is an incremental step from z=z_gaussian to z=<z_gaussian, z_piecewise>. As the prior is learned together with the variational posterior, a more flexible prior would alleviate the regularisation imposed by the KL term. Certainly, more parameters are introduced as well, so a fair comparison would at least pit z=<z_gaussian, z_piecewise> against z=<z_gaussian1, z_gaussian2>, which amounts to a double-sized z_gaussian (see the KL sketch after this review).
--The results shown in Table 3 are implausible. I cannot believe the author used gradients to evaluate the model.
--Eq. 5 is confusing, adding a multiplication sign might help.
--3.1 can be deleted because people attending ICLR are familiar with VAEs.
Typos:
as well as the well as the generated prior -> as well as the generated prior | 4: The reviewer is confident but not absolutely certain that the evaluation is correct |
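To make the reviewer's suggested baseline concrete: the KL term of a diagonal Gaussian factorizes across dimensions, so concatenating two Gaussian blocks <z_gaussian1, z_gaussian2> contributes exactly the same KL penalty as one double-sized Gaussian. The sketch below is standard VAE algebra, not code from the reviewed paper; all variable names are illustrative.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between diagonal Gaussians, summed over dimensions:
    0.5 * sum( logvar_p - logvar_q
               + (var_q + (mu_q - mu_p)^2) / var_p - 1 )."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

rng = np.random.default_rng(1)
mu_q, lv_q = rng.normal(size=16), rng.normal(size=16)
mu_p, lv_p = np.zeros(16), np.zeros(16)

# KL is additive across independent dimensions, so two 8-d Gaussian
# blocks give exactly the same penalty as one 16-d Gaussian -- the
# fair-comparison baseline the review asks for.
full = kl_diag_gaussians(mu_q, lv_q, mu_p, lv_p)
split = (kl_diag_gaussians(mu_q[:8], lv_q[:8], mu_p[:8], lv_p[:8])
         + kl_diag_gaussians(mu_q[8:], lv_q[8:], mu_p[8:], lv_p[8:]))
assert np.isclose(full, split)
```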
BJ9fZNqle | ICLR.cc/2017/conference | 2017 | Multi-modal Variational Encoder-Decoders | ["Iulian V. Serban", "Alexander G. Ororbia II", "Joelle Pineau", "Aaron Courville"] | Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, uni-modal priors — such as the multivariate Gaussian distribution — yet many real-world data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling. | ["Deep learning", "Structured prediction", "Natural language processing"] | SyrSisIVl | Review | 4: Ok but not good enough - rejection | The authors introduce some new prior and approximate posterior families for variational autoencoders, which are compatible with the reparameterization trick, as well as being capable of expressing multiple modes. They also introduce a gating mechanism between prior and posterior. They show improvements on bag of words document modeling, and dialogue response generation. The original abstract is overly strong in its assertion that a unimodal latent prior p(z) cannot fit a multimodal marginal int_z p(x|z)p(z)dz with a DNN response model p(x|z) ("it cannot possibly capture more complex aspects of the data distribution", "critical restriction", etc).
While the assertion that a unimodal latent prior cannot model multimodal observations is false, there are sensible motivations for the piecewise constant prior and posterior. For example, if we think of a VAE as a sort of regularized autoencoder where codes are constrained to "fill up" parts of the prior latent space, then there is a sphere-packing argument to be made that filling a Gaussian prior with Gaussian posteriors is a bad use of code space. Although the authors don't explore this much, a hypercube-based tiling of latent code space is a sensible idea (a toy illustration is sketched at the end of this review).
As stated, I found the message of the paper to be quite sloppy with respect to the concept of "multi-modality." There are three types of multimodality at play here: multimodality in the observed marginal distribution p(x), which can be captured by any deep latent Gaussian model; multimodality in the prior p(z), which makes sense in some situations (e.g. a model of MNIST digits could have 10 prior modes corresponding to latent codes for each digit class); and multimodality in the posterior over z for a given observation x_i, q(z_i|x_i). The final type of multimodality is harder to argue for, except insofar as it allows the expression of flexibly shaped distributions without highly separated modes. I believe flexible posterior approximations are important to enable fine-grained and efficient tiling of latent space, but I don't think these need to have multiple strong modes. I would be interested to see experiments demonstrating otherwise for real-world data.
I think this paper should be more clear about the different types of multi-modality and which parts of their analysis demonstrate which ones. I also found it unsatisfactory that the piecewise variable analysis did not show different components of the multi-modal prior corresponding to different words, but rather just a separation between the Gaussian and the piecewise variables.
As I mention in my earlier questions, I found it surprising that the learned variance and mean of the Gaussian prior help so dramatically with G-NVDM likelihood, when the powerful networks transforming to and from latent space should make it scale-invariant. Explicitly separating out the contributions of a reimplemented base model, prior-posterior interpolation, and the learned prior parameters would strengthen these experiments. Overall, the very strong improvements over NVDM on the text modeling task seem hard to understand, and I would like to see an ablation analysis of all the differences between that model and the proposed one.
The fact that adding more constant components helps for document modeling is interesting, and it would be nice to see more qualitative analysis of what the prior modes represent. I also would be surprised if posterior modes were highly separated, and if they were it would be interesting to explore if they corresponded to e.g. ambiguous word-senses.
The experiments on dialog modeling are, quantitatively, mostly negative results. The observation that the piecewise constant variables encode time-related words while the Gaussian variables encode sentiment is interesting, especially since it occurs in both sets of experiments, and I would like to see analysis of why this is the case. As above, I would also like to see an analysis of the sorts of words encoded in the different prior modes and whether they correspond to e.g. groups of similar holidays or days.
In conclusion, I think the piecewise constant variational family is a good idea, although it is not well motivated by the paper. The experimental results are very good for document modeling, but without ablation analysis against the baseline it is hard to see how they arise from such a small modification to G-NVDM. The fact that H-NVDM performs better is interesting, though. This paper should better motivate the need for different types of multi-modality, and demonstrate that those sorts of things are actually being captured by the model. As it is, the paper introduces an interesting variational family and shows that it performs better on some tasks, but the motivation and analysis are not clearly focused. To demonstrate that this is a broadly applicable family, it would also be good to run experiments on more standard datasets like MNIST. Even without an absolute log-likelihood improvement, if the method yielded interpretable multiple modes this would be a valuable contribution. | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature |
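The hypercube-tiling remark in the review above can be illustrated with a toy sampler. The paper's actual piecewise constant parameterisation is not specified in this preview, so the following mixture-of-uniform-cells prior is only a hypothetical stand-in for the general idea, not the authors' method (note also that, unlike a Gaussian, the categorical cell choice here is not directly reparameterizable):

```python
import numpy as np

def sample_piecewise_constant_prior(logits, n_cells_per_dim, dim, rng):
    """Toy 'piecewise constant' prior over [0, 1]^dim: pick one of the
    n_cells_per_dim**dim axis-aligned hypercube cells with probability
    softmax(logits), then sample uniformly inside that cell."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    flat_idx = rng.choice(len(probs), p=probs)        # which cell (mode)
    # Unravel the flat index into one grid coordinate per dimension.
    cell = np.unravel_index(flat_idx, (n_cells_per_dim,) * dim)
    low = np.array(cell) / n_cells_per_dim            # cell's lower corner
    return low + rng.uniform(size=dim) / n_cells_per_dim

rng = np.random.default_rng(2)
logits = rng.normal(size=3 ** 2)   # 3 cells per dim, 2-d latent: 9 modes
z = sample_piecewise_constant_prior(logits, n_cells_per_dim=3, dim=2, rng=rng)
print(z)  # a point in [0, 1]^2, landing in one of 9 uniform-density cells
```

With n cells per dimension and d dimensions this tiles the latent cube into n**d cells, which is the sense in which such a prior can represent an exponential number of modes.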