Dataset schema (column, dtype, value statistics):

note_id         string   lengths 9–12
forum_id        string   lengths 9–13
invitation      string   lengths 40–95
content         string   lengths 44–35k
type            string   1 class
year            string   7 classes
venue           string   171 classes
paper_title     string   lengths 0–188
paper_authors   string   lengths 2–1.01k
paper_abstract  string   lengths 0–5k
paper_keywords  string   lengths 2–679
forum_url       string   lengths 41–45
pdf_url         string   lengths 39–43
review_url      string   lengths 58–64
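For reference, a minimal sketch of how one of the records below could be parsed in Python. The flat field order mirrors the schema above, but the `parse_record` helper and the record layout are assumptions for illustration, not official tooling for this dataset:

```python
import json

# Column order as listed in the schema above.
FIELDS = [
    "note_id", "forum_id", "invitation", "content", "type", "year",
    "venue", "paper_title", "paper_authors", "paper_abstract",
    "paper_keywords", "forum_url", "pdf_url", "review_url",
]

def parse_record(lines):
    """Zip one record's raw lines onto the schema fields and decode JSON cells."""
    record = dict(zip(FIELDS, lines))
    # 'content' is a JSON object; 'paper_authors' and 'paper_keywords' are JSON lists.
    for key in ("content", "paper_authors", "paper_keywords"):
        record[key] = json.loads(record[key])
    return record

# Usage: rec = parse_record(raw_lines[:14]); print(rec["content"]["rating"])
```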
HkMx83V4l
HJ0NvFzxl
ICLR.cc/2017/conference/-/paper10/official/review
{"title": "Complex implementation of a differentiable memory as a graph with promising preliminary results.", "rating": "9: Top 15% of accepted papers, strong accept", "review": "This paper proposes learning on the fly to represent a dialog as a graph (which acts as the memory), and is first demonstrated on the bAbI tasks. Graph learning is part of the inference process, though there is long term representation learning to learn graph transformation parameters and the encoding of sentences as input to the graph. This seems to be the first implementation of a differentiable memory as graph: it is much more complex than previous approaches like memory networks without significant gain in performance in bAbI tasks, but it is still very preliminary work, and the representation of memory as a graph seems much more powerful than a stack. Clarity is a major issue, but from an initial version that was constructive and better read by a computer than a human, the author proposed a hugely improved later version. This original, technically accurate (within what I understood) and thought provoking paper is worth publishing.\n\nThe preliminary results do not tell us yet if the highly complex graph-based differentiable memory has more learning or generalization capacity than other approaches. The performance on the bAbI task is comparable to the best memory networks, but still worse than more traditional rule induction (see http://www.public.asu.edu/~cbaral/papers/aaai2016-sub.pdf). This is still clearly promising.\n\n The sequence of transformation in algorithm 1 looks sensible, though the authors do not discuss any other operation ordering. In particular, it is not clear to me that you need the node state update step T_h if you have the direct reference update step T_h,direct. \n\nIt is striking that the only trick that is essential for proper performance is the \u2018direct reference\u2019 , which actually has nothing to do with the graph building process, but is rather an attention mechanism for the graph input: attention is focused on words that are relevant to the node type rather than the whole sentence. So the question \u201chow useful are all these graph operations\u201d remain. A much simpler version of a similar trick may have been proposed in the context of memory networks, also for ICLR'17 (see match type in \"LEARNING END-TO-END GOAL-ORIENTED DIALOG\" by Bordes et al)\n\n\nThe authors also mention the time and size needed to train the model: is the issue arising for learning, inference or both? A description of the actual implementation would help (no pointer to open source code is provide). The author mentions Theano in one of my questions: how are the transformations compiled in advance as units? 
How is the gradient back-propagated through the graph is this one is only described at runtime?\n\n\nTypo: in the appendices B.2 and B.2.1, the right side of the equation that applies the update gate has h\u2019_nu while it should be h_nu.\n\nIn the references, the author could mention the pioneering work of Lee Giles on representing graphs with RNNs.\n\nRevision: I have improved my rating for the following reasons:\n- Pointers to an highly readable and well structured Theano source is provided.\n- The delta improvement of the paper has been impressive over the review process, and I am confident this will be an impactful paper.\n- Much simpler alternatives approaches such as Memory Networks seem to be plateauing for problems such as dialog modeling, we need alternatives.\n- The architecture is this work is still too complex, but this is often as we start with DNNs, and then find simplifications that actually improve performance\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning Graphical State Transitions
["Daniel D. Johnson"]
Graph-structured data is important in modeling relationships between multiple entities, and can be used to represent states of the world as well as many data structures. Li et al. (2016) describe a model known as a Gated Graph Sequence Neural Network (GGS-NN) that produces sequences from graph-structured input. In this work I introduce the Gated Graph Transformer Neural Network (GGT-NN), an extension of GGS-NNs that uses graph-structured data as an intermediate representation. The model can learn to construct and modify graphs in sophisticated ways based on textual input, and also to use the graphs to produce a variety of outputs. For example, the model successfully learns to solve almost all of the bAbI tasks (Weston et al., 2016), and also discovers the rules governing graphical formulations of a simple cellular automaton and a family of Turing machines.
["Natural language processing", "Deep learning", "Supervised Learning", "Structured prediction"]
https://openreview.net/forum?id=HJ0NvFzxl
https://openreview.net/pdf?id=HJ0NvFzxl
https://openreview.net/forum?id=HJ0NvFzxl&noteId=HkMx83V4l
Hk_mPh-4e
HJ0NvFzxl
ICLR.cc/2017/conference/-/paper10/official/review
{"title": "", "rating": "9: Top 15% of accepted papers, strong accept", "review": "The paper proposes an extension of the Gated Graph Sequence Neural Network by including in this model the ability to produce complex graph transformations. The underlying idea is to propose a method that will be able build/modify a graph-structure as an internal representation for solving a problem, and particularly for solving question-answering problems in this paper. The author proposes 5 different possible differentiable transformations that will be learned on a training set, typically in a supervised fashion where the state of the graph is given at each timestep. A particular occurence of the model is presented that takes a sequence as an input a iteratively update an internal graph state to a final prediction, and which can be applied for solving QA tasks (e.g BaBi) with interesting results.\n\nThe approach in this paper is really interesting since the proposed model is able to maintain a representation of its current state as a complex graph, but still keeping the property of being differentiable and thus easily learnable through gradient-descent techniques. It can be seen as a succesfull attempt to mix continuous and symbolic representations. It moreover seems more general that the recent attempts made to add some 'symbolic' stuffs in differentiable models (Memory networks, NTM, etc...) since the shape of the state is not fixed here and can evolve. My main concerns is about the way the model is trained i.e by providing the state of the graph at each timestep which can be done for particular tasks (e.g Babi) only, and cannot be the solution for more complex problems. My other concern is about the whole content of the paper that would perhaps best fit a journal format and not a conference format, making the article still difficult to read due to its density. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning Graphical State Transitions
["Daniel D. Johnson"]
Graph-structured data is important in modeling relationships between multiple entities, and can be used to represent states of the world as well as many data structures. Li et al. (2016) describe a model known as a Gated Graph Sequence Neural Network (GGS-NN) that produces sequences from graph-structured input. In this work I introduce the Gated Graph Transformer Neural Network (GGT-NN), an extension of GGS-NNs that uses graph-structured data as an intermediate representation. The model can learn to construct and modify graphs in sophisticated ways based on textual input, and also to use the graphs to produce a variety of outputs. For example, the model successfully learns to solve almost all of the bAbI tasks (Weston et al., 2016), and also discovers the rules governing graphical formulations of a simple cellular automaton and a family of Turing machines.
["Natural language processing", "Deep learning", "Supervised Learning", "Structured prediction"]
https://openreview.net/forum?id=HJ0NvFzxl
https://openreview.net/pdf?id=HJ0NvFzxl
https://openreview.net/forum?id=HJ0NvFzxl&noteId=Hk_mPh-4e
SkibszLEx
HJ0NvFzxl
ICLR.cc/2017/conference/-/paper10/official/review
{"title": "Architecture which allows to learn graph->graph tasks, improves state of the art on babi", "rating": "7: Good paper, accept", "review": "The main contribution of this paper seems to be an introduction of a set of differential graph transformations which will allow you to learn graph->graph classification tasks using gradient descent. This maps naturally to a task of learning a cellular automaton represented as sequence of graphs. In that task, the graph of nodes grows at each iteration, with nodes pointing to neighbors and special nodes 0/1 representing the values. Proposed architecture allows one to learn this sequence of graphs, although in the experiment, this task (Rule 30) was far from solved.\n\nThis idea is combined with ideas from previous papers (GGS-NN) to allow the model to produce textual output rather than graph output, and use graphs as intermediate representation, which allows it to beat state of the art on BaBi tasks. ", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}
review
2017
ICLR.cc/2017/conference
Learning Graphical State Transitions
["Daniel D. Johnson"]
Graph-structured data is important in modeling relationships between multiple entities, and can be used to represent states of the world as well as many data structures. Li et al. (2016) describe a model known as a Gated Graph Sequence Neural Network (GGS-NN) that produces sequences from graph-structured input. In this work I introduce the Gated Graph Transformer Neural Network (GGT-NN), an extension of GGS-NNs that uses graph-structured data as an intermediate representation. The model can learn to construct and modify graphs in sophisticated ways based on textual input, and also to use the graphs to produce a variety of outputs. For example, the model successfully learns to solve almost all of the bAbI tasks (Weston et al., 2016), and also discovers the rules governing graphical formulations of a simple cellular automaton and a family of Turing machines.
["Natural language processing", "Deep learning", "Supervised Learning", "Structured prediction"]
https://openreview.net/forum?id=HJ0NvFzxl
https://openreview.net/pdf?id=HJ0NvFzxl
https://openreview.net/forum?id=HJ0NvFzxl&noteId=SkibszLEx
SJKENmk4l
BJxhLAuxg
ICLR.cc/2017/conference/-/paper69/official/review
{"title": "", "rating": "4: Ok but not good enough - rejection", "review": "The topic of the paper, model-based RL with a learned model, is important and timely. The paper is well written. I feel that the presented results are too incremental. Augmenting the frame prediction network with another head that predicts the reward is a very sensible thing to do. However neither the methodology not the results are novel / surprising, given that the original method of [Oh et al. 2015] already learns to successfully increment score counters in predicted frames in many games.\n\nI\u2019m very much looking forward to seeing the results of applying the learned joint model of frames and rewards to model-based RL as proposed by the authors. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
A Deep Learning Approach for Joint Video Frame and Reward Prediction in Atari Games
["Felix Leibfried", "Nate Kushman", "Katja Hofmann"]
Reinforcement learning is concerned with learning to interact with environments that are initially unknown. State-of-the-art reinforcement learning approaches, such as DQN, are model-free and learn to act effectively across a wide range of environments such as Atari games, but require huge amounts of data. Model-based techniques are more data-efficient, but need to acquire explicit knowledge about the environment dynamics or the reward structure. In this paper we take a step towards using model-based techniques in environments with high-dimensional visual state space when system dynamics and the reward structure are both unknown and need to be learned, by demonstrating that it is possible to learn both jointly. Empirical evaluation on five Atari games demonstrates accurate cumulative reward prediction of up to 200 frames. We consider these positive results as opening up important directions for model-based RL in complex, initially unknown environments.
["atari games", "environments", "deep learning", "joint video frame", "reward prediction", "unknown", "techniques", "reward structure", "reinforcement learning approaches"]
https://openreview.net/forum?id=BJxhLAuxg
https://openreview.net/pdf?id=BJxhLAuxg
https://openreview.net/forum?id=BJxhLAuxg&noteId=SJKENmk4l
ryuwhyQ4e
BJxhLAuxg
ICLR.cc/2017/conference/-/paper69/official/review
{"title": "Final Review", "rating": "4: Ok but not good enough - rejection", "review": "This paper introduces an additional reward-predicting head to an existing NN architecture for video frame prediction. In Atari game playing scenarios, the authors show that this model can successfully predict both reward and next frames.\n\nPros:\n- Paper is well written and easy to follow.\n- Model is clear to understand.\n\nCons:\n- The model is incrementally different than the baseline. The authors state that their purpose is to establish a pre-condition, which they achieve. But this makes the paper quite limited in scope.\n\nThis paper reads like the start of a really good long paper, or a good short paper. Following through on the future work proposed by the authors would make a great paper. As it stands, the paper is a bit thin on new contributions.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
A Deep Learning Approach for Joint Video Frame and Reward Prediction in Atari Games
["Felix Leibfried", "Nate Kushman", "Katja Hofmann"]
Reinforcement learning is concerned with learning to interact with environments that are initially unknown. State-of-the-art reinforcement learning approaches, such as DQN, are model-free and learn to act effectively across a wide range of environments such as Atari games, but require huge amounts of data. Model-based techniques are more data-efficient, but need to acquire explicit knowledge about the environment dynamics or the reward structure. In this paper we take a step towards using model-based techniques in environments with high-dimensional visual state space when system dynamics and the reward structure are both unknown and need to be learned, by demonstrating that it is possible to learn both jointly. Empirical evaluation on five Atari games demonstrates accurate cumulative reward prediction of up to 200 frames. We consider these positive results as opening up important directions for model-based RL in complex, initially unknown environments.
["atari games", "environments", "deep learning", "joint video frame", "reward prediction", "unknown", "techniques", "reward structure", "reinforcement learning approaches"]
https://openreview.net/forum?id=BJxhLAuxg
https://openreview.net/pdf?id=BJxhLAuxg
https://openreview.net/forum?id=BJxhLAuxg&noteId=ryuwhyQ4e
SkchXXWVe
BJxhLAuxg
ICLR.cc/2017/conference/-/paper69/official/review
{"title": "Well written paper with a clear focus and interesting future work proposal but with an overall minor contribution.", "rating": "4: Ok but not good enough - rejection", "review": "The paper extends a recently proposed video frame prediction method with reward prediction in order to learn the unknown system dynamics and reward structure of an environment. The method is tested on several Atari games and is able to predict the reward quite well within a range of about 50 steps. The paper is very well written, focussed and is quite clear about its contribution to the literature. The experiments and methods are sound. However, the results are not really surprising given that the system state and the reward are linked deterministically in Atari games. In other words, we can always decode the reward from a network that successfully encodes future system states in its latent representation. The contribution of the paper is therefore minor. The paper would be much stronger if the authors could include experiments on the two future work directions they suggest in the conclusions: augmenting training with artificial samples and adding Monte-Carlo tree search. The suggestions might decrease the number of real-world training samples and increase performance, both of which would be very interesting and impactful.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
A Deep Learning Approach for Joint Video Frame and Reward Prediction in Atari Games
["Felix Leibfried", "Nate Kushman", "Katja Hofmann"]
Reinforcement learning is concerned with learning to interact with environments that are initially unknown. State-of-the-art reinforcement learning approaches, such as DQN, are model-free and learn to act effectively across a wide range of environments such as Atari games, but require huge amounts of data. Model-based techniques are more data-efficient, but need to acquire explicit knowledge about the environment dynamics or the reward structure. In this paper we take a step towards using model-based techniques in environments with high-dimensional visual state space when system dynamics and the reward structure are both unknown and need to be learned, by demonstrating that it is possible to learn both jointly. Empirical evaluation on five Atari games demonstrates accurate cumulative reward prediction of up to 200 frames. We consider these positive results as opening up important directions for model-based RL in complex, initially unknown environments.
["atari games", "environments", "deep learning", "joint video frame", "reward prediction", "unknown", "techniques", "reward structure", "reinforcement learning approaches"]
https://openreview.net/forum?id=BJxhLAuxg
https://openreview.net/pdf?id=BJxhLAuxg
https://openreview.net/forum?id=BJxhLAuxg&noteId=SkchXXWVe
r1w-zAZ4e
r10FA8Kxg
ICLR.cc/2017/conference/-/paper102/official/review
{"title": "Experimental comparison of shallow, deep, and (non)-convolutional architectures with a fixed parameter budget", "rating": "7: Good paper, accept", "review": "This paper aims to investigate the question if shallow non-convolutional networks can be as affective as deep convolutional ones for image classification, given that both architectures use the same number of parameters. \nTo this end the authors conducted a series of experiments on the CIFAR10 dataset.\nThey find that there is a significant performance gap between the two approaches, in favour of deep CNNs. \nThe experiments are well designed and involve a distillation training approach, and the results are presented in a comprehensive manner.\nThey also observe (as others have before) that student models can be shallower than the teacher model from which they are trained for comparable performance.\n\nMy take on these results is that they suggest that using (deep) conv nets is more effective, since this model class encodes a form of a-prori or domain knowledge that images exhibit a certain degree of translation invariance in the way they should be processed for high-level recognition tasks. The results are therefore perhaps not quite surprising, but not completely obvious either.\n\nAn interesting point on which the authors comment only very briefly is that among the non-convolutional architectures the ones using 2 or 3 hidden layers outperform those with 1, 4 or 5 hidden layers. Do you have an interpretation / hypothesis of why this is the case? It would be interesting to discuss the point a bit more in the paper.\n\nIt was not quite clear to me why were the experiments were limited to use 30M parameters at most. None of the experiments in Figure 1 seem to be saturated. Although the performance gap between CNN and MLP is large, I think it would be worthwhile to push the experiment further for the final version of the paper.\n\nThe authors state in the last paragraph that they expect shallow nets to be relatively worse in an ImageNet classification experiment. \nCould the authors argue why they think this to be the case? \nOne could argue that the much larger training dataset size could compensate for shallow and/or non-convolutional choices of the architecture. \nSince MLPs are universal function approximators, one could understand architecture choices as expressions of certain priors over the function space, and in a large-data regimes such priors could be expected to be of lesser importance.\nThis issue could for example be examined on ImageNet when varying the amount of training data.\nAlso, the much higher resolution of ImageNet images might have a non-trivial impact on the CNN-MLP comparison as compared to the results established on the CIFAR10 dataset.\n\nExperiments on a second data set would also help to corroborate the findings, demonstrating to what extent such findings are variable across datasets.\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
["Gregor Urban", "Krzysztof J. Geras", "Samira Ebrahimi Kahou", "Ozlem Aslan", "Shengjie Wang", "Abdelrahman Mohamed", "Matthai Philipose", "Matt Richardson", "Rich Caruana"]
Yes, they do. This paper provides the first empirical demonstration that deep convolutional models really need to be both deep and convolutional, even when trained with methods such as distillation that allow small or shallow models of high accuracy to be trained. Although previous research showed that shallow feed-forward nets sometimes can learn the complex functions previously learned by deep nets while using the same number of parameters as the deep models they mimic, in this paper we demonstrate that the same methods cannot be used to train accurate models on CIFAR-10 unless the student models contain multiple layers of convolution. Although the student models do not have to be as deep as the teacher model they mimic, the students need multiple convolutional layers to learn functions of comparable accuracy as the deep convolutional teacher.
["Deep learning", "Transfer Learning"]
https://openreview.net/forum?id=r10FA8Kxg
https://openreview.net/pdf?id=r10FA8Kxg
https://openreview.net/forum?id=r10FA8Kxg&noteId=r1w-zAZ4e
BkaSqlzEe
r10FA8Kxg
ICLR.cc/2017/conference/-/paper102/official/review
{"title": "Experimental paper with interesting results. Well written. Solid experiments. ", "rating": "7: Good paper, accept", "review": "Description.\nThis paper describes experiments testing whether deep convolutional networks can be replaced with shallow networks with the same number of parameters without loss of accuracy. The experiments are performed on he CIFAR 10 dataset where deep convolutional teacher networks are used to train shallow student networks using L2 regression on logit outputs. The results show that similar accuracy on the same parameter budget can be only obtained when multiple layers of convolution are used. \n\nStrong points.\n- The experiments are carefully done with thorough selection of hyperparameters. \n- The paper shows interesting results that go partially against conclusions from the previous work in this area (Ba and Caruana 2014).\n- The paper is well and clearly written.\n\nWeak points:\n- CIFAR is still somewhat toy dataset with only 10 classes. It would be interesting to see some results on a more challenging problem such as ImageNet. Would the results for a large number of classes be similar?\n\nOriginality:\n- This is mainly an experimental paper, but the question it asks is interesting and worth investigation. The experimental results are solid and provide new insights.\n\nQuality:\n- The experiments are well done.\n\nClarity:\n- The paper is well written and clear.\n\nSignificance:\n- The results go against some of the conclusions from previous work, so should be published and discussed.\n\nOverall:\nExperimental paper with interesting results. Well written. Solid experiments. \n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
["Gregor Urban", "Krzysztof J. Geras", "Samira Ebrahimi Kahou", "Ozlem Aslan", "Shengjie Wang", "Abdelrahman Mohamed", "Matthai Philipose", "Matt Richardson", "Rich Caruana"]
Yes, they do. This paper provides the first empirical demonstration that deep convolutional models really need to be both deep and convolutional, even when trained with methods such as distillation that allow small or shallow models of high accuracy to be trained. Although previous research showed that shallow feed-forward nets sometimes can learn the complex functions previously learned by deep nets while using the same number of parameters as the deep models they mimic, in this paper we demonstrate that the same methods cannot be used to train accurate models on CIFAR-10 unless the student models contain multiple layers of convolution. Although the student models do not have to be as deep as the teacher model they mimic, the students need multiple convolutional layers to learn functions of comparable accuracy as the deep convolutional teacher.
["Deep learning", "Transfer Learning"]
https://openreview.net/forum?id=r10FA8Kxg
https://openreview.net/pdf?id=r10FA8Kxg
https://openreview.net/forum?id=r10FA8Kxg&noteId=BkaSqlzEe
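Both reviews above refer to distillation by L2 regression on the teacher's logit outputs (in the style of Ba and Caruana, 2014). A minimal sketch of that objective, assuming a standard PyTorch training setup; the function and tensor names are illustrative, not from the paper:

```python
import torch
import torch.nn.functional as F

def logit_regression_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor) -> torch.Tensor:
    # The student mimics the teacher's pre-softmax outputs directly,
    # rather than training on hard class labels.
    return F.mse_loss(student_logits, teacher_logits)

# Usage: loss = logit_regression_loss(student(x), teacher(x).detach())
```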
B15BdW8Vx
Sk8csP5ex
ICLR.cc/2017/conference/-/paper423/official/review
{"title": "interesting extension of the result of Choromanska et al. but too incremental", "rating": "3: Clear rejection", "review": "This paper shows how spin glass techniques that were introduced in Choromanska et al. to analyze surface loss of deep neural networks can be applied to deep residual networks. This is an interesting contribution but it seems to me that the results are too similar to the ones in Choromanska et al. and thus the novelty is seriously limited. Main theoretical techniques described in the paper were already introduced and main theoretical results mentioned there were in fact already proved. The authors also did not get rid of lots of assumptions from Choromanska et al. (path-independence, assumptions about weights distributions, etc.).", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
The loss surface of residual networks: Ensembles and the role of batch normalization
["Etai Littwin", "Lior Wolf"]
Deep Residual Networks present a premium in performance in comparison to conventional networks of the same depth and are trainable at extreme depths. It has recently been shown that Residual Networks behave like ensembles of relatively shallow networks. We show that these ensembles are dynamic: while initially the virtual ensemble is mostly at depths lower than half the network’s depth, as training progresses, it becomes deeper and deeper. The main mechanism that controls the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch Normalization technique. We explain this behavior and demonstrate the driving force behind it. As a main tool in our analysis, we employ generalized spin glass models, which we also use in order to study the number of critical points in the optimization of Residual Networks.
["Deep learning", "Theory"]
https://openreview.net/forum?id=Sk8csP5ex
https://openreview.net/pdf?id=Sk8csP5ex
https://openreview.net/forum?id=Sk8csP5ex&noteId=B15BdW8Vx
rkva93GNg
Sk8csP5ex
ICLR.cc/2017/conference/-/paper423/official/review
{"title": "Interesting theoretical analysis (with new supporting experiments) but presented in a slightly confusing fashion.", "rating": "7: Good paper, accept", "review": "Summary:\nIn this paper, the authors study ResNets through a theoretical formulation of a spin glass model. The conclusions are that ResNets behave as an ensemble of shallow networks at the start of training (by examining the magnitude of the weights for paths of a specific length) but this changes through training, through which the scaling parameter C (from assumption A4) increases, causing it to behave as an ensemble of deeper and deeper networks.\n\nClarity:\nThis paper was somewhat difficult to follow, being heavy in notation, with perhaps some notation overloading. A summary of some of the proofs in the main text might have been helpful.\n\nSpecific Comments:\n- In the proof of Lemma 2, I'm not sure where the sequence beta comes from (I don't see how it follows from 11?)\n\n- The ResNet structure used in the paper is somewhat different from normal with multiple layers being skipped? (Can the same analysis be used if only one layer is skipped? It seems like the skipping mostly affects the number of paths there are of a certain length?)\n\n- The new experiments supporting the scale increase in practice are interesting! I'm not sure about Theorems 3, 4 necessarily proving this link theoretically however, particularly given the simplifying assumption at the start of Section 4.2?\n\n\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
The loss surface of residual networks: Ensembles and the role of batch normalization
["Etai Littwin", "Lior Wolf"]
Deep Residual Networks present a premium in performance in comparison to conventional networks of the same depth and are trainable at extreme depths. It has recently been shown that Residual Networks behave like ensembles of relatively shallow networks. We show that these ensembles are dynamic: while initially the virtual ensemble is mostly at depths lower than half the network’s depth, as training progresses, it becomes deeper and deeper. The main mechanism that controls the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch Normalization technique. We explain this behavior and demonstrate the driving force behind it. As a main tool in our analysis, we employ generalized spin glass models, which we also use in order to study the number of critical points in the optimization of Residual Networks.
["Deep learning", "Theory"]
https://openreview.net/forum?id=Sk8csP5ex
https://openreview.net/pdf?id=Sk8csP5ex
https://openreview.net/forum?id=Sk8csP5ex&noteId=rkva93GNg
ryTj8pINe
Sk8csP5ex
ICLR.cc/2017/conference/-/paper423/official/review
{"title": "promising insightful results", "rating": "7: Good paper, accept", "review": "\nThis paper extend the Spin Glass analysis of Choromanska et al. (2015a) to Res Nets which yield the novel dynamic ensemble results for Res Nets and the connection to Batch Normalization and the analysis of their loss surface of Res Nets.\n\nThe paper is well-written with many insightful explanation of results. Although the technical contributions extend the Spin Glass model analysis of the ones by Choromanska et al. (2015a), the updated version could eliminate one of the unrealistic assumptions and the analysis further provides novel dynamic ensemble results and the connection to Batch Normalization that gives more insightful results about the structure of Res Nets. \n\nIt is essential to show this dynamic behaviour in a regime without batch normalization to untangle the normalization effect on ensemble feature. Hence authors claim that steady increase in the L_2 norm of the weights will maintain the this feature but setting for Figure 1 is restrictive to empirically support the claim. At least results on CIFAR 10 without batch normalization for showing effect of L_2 norm increase and results that support claims about Theorem 4 would strengthen the paper.\n\nThis work provides an initial rigorous framework to analyze better the inherent structure of the current state of art Res Net architectures and its variants which can stimulate potentially more significant results towards careful understanding of current state of art models (Rather than always to attempting to improve the performance of Res Nets by applying intuitive incremental heuristics, it is important to progress on some solid understanding too).", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
The loss surface of residual networks: Ensembles and the role of batch normalization
["Etai Littwin", "Lior Wolf"]
Deep Residual Networks present a premium in performance in comparison to conventional networks of the same depth and are trainable at extreme depths. It has recently been shown that Residual Networks behave like ensembles of relatively shallow networks. We show that these ensembles are dynamic: while initially the virtual ensemble is mostly at depths lower than half the network’s depth, as training progresses, it becomes deeper and deeper. The main mechanism that controls the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch Normalization technique. We explain this behavior and demonstrate the driving force behind it. As a main tool in our analysis, we employ generalized spin glass models, which we also use in order to study the number of critical points in the optimization of Residual Networks.
["Deep learning", "Theory"]
https://openreview.net/forum?id=Sk8csP5ex
https://openreview.net/pdf?id=Sk8csP5ex
https://openreview.net/forum?id=Sk8csP5ex&noteId=ryTj8pINe
BkcY-CZNl
BJbD_Pqlg
ICLR.cc/2017/conference/-/paper403/official/review
{"title": "Updated Review", "rating": "7: Good paper, accept", "review": "The paper reports several connections between the image representations in state-of-the are object recognition networks and findings from human visual psychophysics:\n1) It shows that the mean L1 distance in the feature space of certain CNN layers is predictive of human noise-detection thresholds in natural images.\n2) It reports that for 3 different 2-AFC tasks for which there exists a condition that is hard and one that is easy for humans, the mutual information between decision label and quantised CNN activations is usually higher in the condition that is easier for humans.\n3) It reproduces the general bandpass nature of contrast/frequency detection sensitivity in humans. \n\nWhile these findings appear interesting, they are also rather anecdotal and some of them seem to be rather trivial (e.g. findings in 2). To make a convincing statement it would be important to explore what aspects of the CNN lead to the reported findings. One possible way of doing that could be to include good baseline models to compare against. As I mentioned before, one such baseline should be reasonable low-level vision model. Another interesting direction would be to compare the results for the same network at different training stages.\n\nIn that way one might be able to find out which parts of the reported results can be reproduced by simple low-level image processing systems, which parts are due to the general deep network\u2019s architecture and which parts arise from the powerful computational properties (object recognition performance) of the CNNs.\n\nIn conclusion, I believe that establishing correspondences between state-of-the art CNNs and human vision is a potentially fruitful approach. However to make a convincing point that found correspondences are non-trivial, it is crucial to show that non-trivial aspects of the CNN lead to the reported findings, which was not done. Therefore, the contribution of the paper is limited since I cannot judge whether the findings really tell me something about a unique relation between high-performing CNNs and the human visual system.\n\nUPDATE:\n\nThank you very much for your extensive revision and inclusion of several of the suggested baselines. \nThe results of the baseline models often raise more questions and make the interpretation of the results more complex, but I feel that this reflects the complexity of the topic and makes the work rather more worthwhile. \n\nOne further suggestion: As the experiments with the snapshots of the CaffeNet shows, the direct relationship between CNN performance and prediction accuracy of biological vision known from Yamins et al. 2014 and Cadieu et al. 2014 does not necessarily hold in your experiments. I think this should be discussed somewhere in the paper.\n\nAll in all, I think that the paper now constitutes a decent contribution relating state-of-the art CNNs to human psychophysics and I would be happy for this work to be accepted.\n\nI raise the my rating for this paper to 7.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Human perception in computer vision
["Ron Dekel"]
Computer vision has made remarkable progress in recent years. Deep neural network (DNN) models optimized to identify objects in images exhibit unprecedented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now be solved more easily based on previous learning. Biological vision (learned in life and through evolution) is also accurate and general-purpose. Is it possible that these different learning regimes converge to similar problem-dependent optimal computations? We therefore asked whether the human system-level computation of visual perception has DNN correlates and considered several anecdotal test cases. We found that perceptual sensitivity to image changes has DNN mid-computation correlates, while sensitivity to segmentation, crowding and shape has DNN end-computation correlates. Our results quantify the applicability of using DNN computation to estimate perceptual loss, and are consistent with the fascinating theoretical view that properties of human perception are a consequence of architecture-independent visual learning.
["Computer vision", "Transfer Learning"]
https://openreview.net/forum?id=BJbD_Pqlg
https://openreview.net/pdf?id=BJbD_Pqlg
https://openreview.net/forum?id=BJbD_Pqlg&noteId=BkcY-CZNl
H19W6GPVl
BJbD_Pqlg
ICLR.cc/2017/conference/-/paper403/official/review
{"title": "Review of \"Human Perception in Computer Vision\"", "rating": "6: Marginally above acceptance threshold", "review": "The author works to compare DNNs to human visual perception, both quantitatively and qualitatively. \n\nTheir first result involves performing a psychophysical experiment both on humans and on a model and then comparing the results (actually I think the psychophysical data was collected in a different work, and is just used here). The specific psychophysical experiment determined, separately for each of a set of approx. 1110 images, what the noise level of additive noise would have to be to make a just-noticeable-difference for humans in discriminating the noiseless image from the noisy one. The authors then define a metric on neural networks that allows them to measure what they posit might be a similar property for the networks. They then correlate the pattern of noise levels between neural networks that the humans. Deep neural networks end up being much better predictors of the human pattern of noise levels than simpler measure of image perturbation (e.g. RMS contrast). \n\nA second result involves comparing DNNs to humans in terms of their pattern errors in a series of highly controlled experiments using stimuli that illustrate classic properties of human visual processing -- including segmentation, crowding and shape understanding. They then used an information-theoretic single-neuron metric of discriminability to assess similar patterns of errors for the DNNs. Again, top layers of DNNs were able to reproduce the human patterns of difficulty across stimuli, at least to some extent. \n\nA third result involves comparing DNNs to humans in terms of their pattern of contrast sensitivity across a series of sine-grating images at different frequencies. (There is a classic result from vision research as to what this pattern should be, so it makes a natural target for comparison to models.) The authors define a DNN correlate for the propertie in terms of the cross-neuron average of the L1-distance between responses to a blank image and responses to a sinuisoid of each contrast and frequency. They then qualitatively compare the results of this metric for DNNs models to known results from the literature on humans, finding that, like humans, there is an apparent bandpass response for low-contrast gratings and a mostly constant response at high contrast. \n\nPros:\n * The general concept of comparing deep nets to psychophysical results in a detailed, quantitative way, is really nice. \n\n * They nicely defined a set of \"linking functions\", e.g. metrics that express how a specific behavioral result is to be generated from the neural network. (Ie. the L1 metrics in results 1 and 3 and the information-theoretic measure in result 2.) The framework for setting up such linking functions seems like a great direction to me. \n\n * The actual psychophysical data seems to have been handled in a very careful and thoughtful way. These folks clearly know what they're doing on the psychophysical end. \n\n\nCons:\n * To my mind, the biggest problem wit this paper is that that it doesn't say something that we didn't really know already. Existing results have shown that DNNs are pretty good models of the human visual system in a whole bunch of ways, and this paper adds some more ways. 
What would have been great would be: \n (a) showing that they metric of comparison to humans that was sufficiently sensitive that it could pull apart various DNN models, making one clearly better than the others. \n (b) identifying a wide gap between the DNNs and the humans that is still unfilled. They sort of do this, since while the DNNs are good at reproducing the human judgements in Result 1, they are not perfect -- gap is between 60% explained variance and 84% inter-human consistency. This 24% gap is potentially important, so I'd really like to see them have explored that gap more -- e.g. (i) widening the gap by identifying which images caused the gap most and focusing a test on those, or (ii) closing the gap by training a neural network to get the pattern 100% correct and seeing if that made better CNNs as measured on other metrics/tasks. \n\nIn other words, I would definitely have traded off not having results 2 and 3 for a deeper exploration of result 1. I think their overall approach could be very fruitful, but it hasn't really been carried far enough here. \n\n * I found a few things confusing about the layout of the paper. I especially found that the quantitative results for results 2 and 3 were not clearly displayed. Why was figure 8 relegated to the appendix? Where are the quantifications of model-human similarities for the data shown in Figure 8? Isn't this the whole meat of their second result? This should really be presented in a more clear way. \n\n * Where is the quantification of model-human similarity for the data show in Figure 3? Isn't there a way to get the human contrast-sensitivity curve and then compare it to that of models in a more quantitively precise way, rather than just note a qualitative agreement? It seems odd to me that this wasn't done. \n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Human perception in computer vision
["Ron Dekel"]
Computer vision has made remarkable progress in recent years. Deep neural network (DNN) models optimized to identify objects in images exhibit unprecedented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now be solved more easily based on previous learning. Biological vision (learned in life and through evolution) is also accurate and general-purpose. Is it possible that these different learning regimes converge to similar problem-dependent optimal computations? We therefore asked whether the human system-level computation of visual perception has DNN correlates and considered several anecdotal test cases. We found that perceptual sensitivity to image changes has DNN mid-computation correlates, while sensitivity to segmentation, crowding and shape has DNN end-computation correlates. Our results quantify the applicability of using DNN computation to estimate perceptual loss, and are consistent with the fascinating theoretical view that properties of human perception are a consequence of architecture-independent visual learning.
["Computer vision", "Transfer Learning"]
https://openreview.net/forum?id=BJbD_Pqlg
https://openreview.net/pdf?id=BJbD_Pqlg
https://openreview.net/forum?id=BJbD_Pqlg&noteId=H19W6GPVl
ByL97qNEg
BJbD_Pqlg
ICLR.cc/2017/conference/-/paper403/official/review
{"title": "Review of \"HUMAN PERCEPTION IN COMPUTER VISION\"", "rating": "6: Marginally above acceptance threshold", "review": "This paper compares the performance, in terms of sensitivity to perturbations, of multilayer neural networks to human vision. In many of the tasks tested, multilayer neural networks exhibit similar sensitivities as human vision. \n\nFrom the tasks used in this paper one may conclude that multilayer neural networks capture many properties of the human visual system. But of course there are well known adversarial examples in which small, perceptually invisible perturbations cause catastrophic errors in categorization, so against that backdrop it is difficult to know what to make of these results. That the two systems exhibit similar phenomenologies in some cases could mean any number of things, and so it would have been nice to see a more in depth analysis of why this is happening in some cases and not others. For example, for the noise perturbations described in the the first section, one sees already that conv2 is correlated with human sensitivity. So why not examine how the first layer filters are being combined to produce this contextual effect? From that we might actually learn something about neural mechanisms.\n\nAlthough I like and am sympathetic to the direction the author is taking here, I feel it just scratches the surface in terms of analyzing perceptual correlates in multilayer neural nets. \n\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Human perception in computer vision
["Ron Dekel"]
Computer vision has made remarkable progress in recent years. Deep neural network (DNN) models optimized to identify objects in images exhibit unprecedented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now be solved more easily based on previous learning. Biological vision (learned in life and through evolution) is also accurate and general-purpose. Is it possible that these different learning regimes converge to similar problem-dependent optimal computations? We therefore asked whether the human system-level computation of visual perception has DNN correlates and considered several anecdotal test cases. We found that perceptual sensitivity to image changes has DNN mid-computation correlates, while sensitivity to segmentation, crowding and shape has DNN end-computation correlates. Our results quantify the applicability of using DNN computation to estimate perceptual loss, and are consistent with the fascinating theoretical view that properties of human perception are a consequence of architecture-independent visual learning.
["Computer vision", "Transfer Learning"]
https://openreview.net/forum?id=BJbD_Pqlg
https://openreview.net/pdf?id=BJbD_Pqlg
https://openreview.net/forum?id=BJbD_Pqlg&noteId=ByL97qNEg
ryhZ3-M4l
HkwoSDPgg
ICLR.cc/2017/conference/-/paper45/official/review
{"title": "Nice paper, strong accept", "rating": "9: Top 15% of accepted papers, strong accept", "review": "This paper addresses the problem of achieving differential privacy in a very general scenario where a set of teachers is trained on disjoint subsets of sensitive data and the student performs prediction based on public data labeled by teachers through noisy voting. I found the approach altogether plausible and very clearly explained by the authors. Adding more discussion of the bound (and its tightness) from Theorem 1 itself would be appreciated. A simple idea of adding perturbation error to the counts, known from differentially-private literature, is nicely re-used by the authors and elegantly applied in a much broader (non-convex setting) and practical context than in a number of differentially-private and other related papers. The generality of the approach, clear improvement over predecessors, and clarity of the writing makes the method worth publishing.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
["Nicolas Papernot", "Mart\u00edn Abadi", "\u00dalfar Erlingsson", "Ian Goodfellow", "Kunal Talwar"]
Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as ''teachers'' for a ''student'' model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.
["student", "model", "teachers", "knowledge transfer", "deep learning", "private training data", "data", "models", "machine", "applications"]
https://openreview.net/forum?id=HkwoSDPgg
https://openreview.net/pdf?id=HkwoSDPgg
https://openreview.net/forum?id=HkwoSDPgg&noteId=ryhZ3-M4l
HJyf86bNx
HkwoSDPgg
ICLR.cc/2017/conference/-/paper45/official/review
{"title": "A nice contribution to differentially-private deep learning", "rating": "9: Top 15% of accepted papers, strong accept", "review": "Altogether a very good paper, a nice read, and interesting. The work advances the state of the art on differentially-private deep learning, is quite well-written, and relatively thorough.\n\nOne caveat is that although the approach is intended to be general, no theoretical guarantees are provided about the learning performance. Privacy-preserving machine learning papers often analyze both the privacy (in the worst case, DP setting) and the learning performance (often under different assumptions). Since the learning performance might depend on the choice of architecture; future experimentation is encouraged, even using the same data sets, with different architectures. If this will not be added, then please justify the choice of architecture used, and/or clarify what can be generalized about the observed learning performance.\n\nAnother caveat is that the reported epsilons are not those that can be privately released; the authors note that their technique for doing so would change the resulting epsilon. However this would need to be resolved in order to have a meaningful comparison to the epsilon-delta values reported in related work.\n\nFinally, as has been acknowledged in the paper, the present approach may not work on other natural data types. Experiments on other data sets is strongly encouraged. Also, please cite the data sets used.\n\nOther comments:\n\nDiscussion of certain parts of the related work are thorough. However, please add some survey/discussion of the related work on differentially-private semi-supervised learning. For example, in the context of random forests, the following paper also proposed differentially-private semi-supervised learning via a teacher-learner approach (although not denoted as \u201cteacher-learner\u201d). The only time the private labeled data is used is when learning the \u201cprimary ensemble.\u201d A \"secondary ensemble\" is then learned only from the unlabeled (non-private) data, with pseudo-labels generated by the primary ensemble.\n\nG. Jagannathan, C. Monteleoni, and K. Pillaipakkamnatt: A Semi-Supervised Learning Approach to Differential Privacy. Proc. 2013 IEEE International Conference on Data Mining Workshops, IEEE Workshop on Privacy Aspects of Data Mining (PADM), 2013.\n\nSection C. does a nice comparison of approaches. Please make sure the quantitative results here constitute an apples-to-apples comparison with the GAN results. \n\nThe paper is extremely well-written, for the most part. Some places needing clarification include:\n- Last paragraph of 3.1. \u201call teachers\u2026.get the same training data\u2026.\u201d This should be rephrased to make it clear that it is not the same w.r.t. all the teachers, but w.r.t. the same teacher on the neighboring database.\n- 4.1: The authors state: \u201cThe number n of teachers is limited by a trade-off between the classification task\u2019s complexity and the available data.\u201d However, since this tradeoff is not formalized, the statement is imprecise. In particular, if the analysis is done in the i.i.d. setting, the tradeoff would also likely depend on the relation of the target hypothesis to the data distribution.\n- Discussion of figure 3 was rather unclear in the text and caption and should be revised for clarity. In the text section, at first the explanation seems to imply that a larger gap is better (as is also indicated in the caption). 
However later it is stated that the gap stays under 20%. These sentences seem contradictory, which is likely not what was intended.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
["Nicolas Papernot", "Mart\u00edn Abadi", "\u00dalfar Erlingsson", "Ian Goodfellow", "Kunal Talwar"]
Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as ''teachers'' for a ''student'' model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.
["student", "model", "teachers", "knowledge transfer", "deep learning", "private training data", "data", "models", "machine", "applications"]
https://openreview.net/forum?id=HkwoSDPgg
https://openreview.net/pdf?id=HkwoSDPgg
https://openreview.net/forum?id=HkwoSDPgg&noteId=HJyf86bNx
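A minimal Python sketch of the noisy teacher-vote aggregation that the abstract and reviews above discuss; the Laplace scale parameter gamma, the function name, and all default values are illustrative assumptions, not the authors' implementation:

import numpy as np

def noisy_aggregate(teacher_preds, num_classes, gamma=0.05, rng=None):
    # Label one student query by noisy plurality vote over teacher predictions.
    # teacher_preds: 1-D integer array, one predicted class per teacher.
    # gamma (assumed here) sets the Laplace noise scale and, through the
    # privacy analysis, the differential-privacy epsilon debated above.
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    counts += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))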
HJNWD6Z4l
HkwoSDPgg
ICLR.cc/2017/conference/-/paper45/official/review
{"title": "Good theory", "rating": "7: Good paper, accept", "review": "This paper discusses how to guarantee privacy for training data. In the proposed approach multiple models trained with disjoint datasets are used as ``teachers'' model, which will train a ``student'' model to predict an output chosen by noisy voting among all of the teachers. \n\nThe theoretical results are nice but also intuitive. Since teachers' result are provided via noisy voting, the student model may not duplicate the teacher's behavior. However, the probabilistic bound has quite a number of empirical parameters, which makes me difficult to decide whether the security is 100% guaranteed or not.\n\nThe experiments on MNIST and SVHN are good. However, as the paper claims, the proposed approach may be mostly useful for sensitive data like medical histories, it will be nice to conduct one or two experiments on such applications. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
["Nicolas Papernot", "Mart\u00edn Abadi", "\u00dalfar Erlingsson", "Ian Goodfellow", "Kunal Talwar"]
Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as ''teachers'' for a ''student'' model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.
["student", "model", "teachers", "knowledge transfer", "deep learning", "private training data", "data", "models", "machine", "applications"]
https://openreview.net/forum?id=HkwoSDPgg
https://openreview.net/pdf?id=HkwoSDPgg
https://openreview.net/forum?id=HkwoSDPgg&noteId=HJNWD6Z4l
Hkes73e4g
S1Bb3D5gg
ICLR.cc/2017/conference/-/paper428/official/review
{"title": "Review", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper presents a new, public dataset and tasks for goal-oriented dialogue applications. The dataset and tasks are constructed artificially using rule-based programs, in such a way that different aspects of dialogue system performance can be evaluated ranging from issuing API calls to displaying options, as well as full-fledged dialogue.\n\nThis is a welcome contribution to the dialogue literature, which will help facilitate future research into developing and understanding dialogue systems. Still, there are pitfalls in taking this approach. First, it is not clear how suitable Deep Learning models are for these tasks compared to traditional methods (rule-based systems or shallow models), since Deep Learning models are known to require many training examples and therefore performance difference between different neural networks may simply boil down to regularization techniques. The tasks 1-5 are also completely deterministic, which means evaluating performance on these tasks won't measure the ability of the models to handle noisy and ambiguous interactions (e.g. inferring a distribution over user goals, or executing dialogue repair strategies), which is a very important aspect in dialogue applications. Overall, I still believe this is an interesting direction to explore.\n\nAs discussed in the comments below, the paper does not have any baseline model with word order information. I think this is a strong weakness of the paper, because it makes the neural networks appear unreasonably strong, yet simpler baselines could very likely be be competitive (or better) than the proposed neural networks. To maintain a fair evaluation and correctly assess the power of representation learning for this task, I think it's important that the authors experiment with one additional non-neural network benchmark model which takes into account word order information. This would more convincly demonstrate the utility of Deep Learning models for this task. For example, the one could experiment with a logistic regression model which takes as input 1) word embeddings (similar to the Supervised Embeddings model), 2) bi-gram features, and 3) match-type features. If such a baseline is included, I will increase my rating to 8.\n\n\n\nFinal minor comment: in the conclusion, the paper states \"the existing work has no well defined measures of performances\". This is not really true. End-to-end trainable models for task-oriented dialogue have well-defined performance measures. See, for example \"A Network-based End-to-End Trainable Task-oriented Dialogue System\" by Wen et al. On the other hand, non-goal-oriented dialogue are generally harder to evaluate, but given human subjects these can also be evaluated. In fact, this is what Liu et al (2016) do for Twitter. See also, \"Strategy and Policy Learning for Non-Task-Oriented Conversational Systems\" by Yu et al.\n\n----\n\nI've updated my score following the new results added in the paper.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Learning End-to-End Goal-Oriented Dialog
["Antoine Bordes", "Y-Lan Boureau", "Jason Weston"]
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End- to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
["dialog", "applications", "dialog systems", "data", "lot", "handcrafting", "new domains", "components", "dialogs"]
https://openreview.net/forum?id=S1Bb3D5gg
https://openreview.net/pdf?id=S1Bb3D5gg
https://openreview.net/forum?id=S1Bb3D5gg&noteId=Hkes73e4g
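A sketch of the kind of non-neural, word-order-aware baseline the review above requests: logistic regression over unigram and bigram counts. The match-type features are omitted, and the pipeline, separator token, and variable names are illustrative assumptions, not the paper's setup:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def make_ranking_baseline():
    # Bigram counts inject the word-order information the bag-of-words
    # baselines in the paper lack; match-type features would be appended here.
    vectorizer = CountVectorizer(ngram_range=(1, 2))
    return make_pipeline(vectorizer, LogisticRegression(max_iter=1000))

# Usage sketch: score each (context, candidate) pair as a single text and
# rank candidate responses for the next turn by predicted probability.
# model = make_ranking_baseline()
# model.fit([c + " || " + r for c, r in pairs], labels)  # labels: 1 = correct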
Bk118K4Ne
S1Bb3D5gg
ICLR.cc/2017/conference/-/paper428/official/review
{"title": "Thought provoking paper, more on the metrics than the algorithms.", "rating": "8: Top 50% of accepted papers, clear accept", "review": "Attempts to use chatbots for every form of human-computer interaction has been a major trend in 2016, with claims that they could solve many forms of dialogs beyond simple chit-chat. This paper represents a serious reality check. While it is mostly relevant for Dialog/Natural Language venues (to educate software engineer about the limitations of current chatbots), it can also be published at Machine Learning venues (to educate researchers about the need for more realistic validation of ML applied to dialogs), so I would consider this work of high significance.\n\nTwo important conjectures are underlying this paper and likely to open to more research. While they are not in writing, Antoine Bordes clearly stated them during a NIPS workshop presentation that covered this work. Considering the metrics chosen in this paper:\n1)\tThe performance of end2end ML approaches is still insufficient for goal oriented dialogs.\n2)\tWhen comparing algorithms, relative performance on synthetic data is a good predictor of performance on natural data. This would be quite a departure from previous observations, but the authors made a strong effort to match the synthetic and natural conditions.\n\nWhile its original algorithmic contribution consists in one rather simple addition to memory networks (match type), it is the first time these are deployed and tested on a goal-oriented dialog, and the experimental protocol is excellent. The overall paper clarity is excellent and accessible to a readership beyond ML and dialog researchers. I was in particular impressed by how the short appendix on memory networks summarized them so well, followed by the tables that explained the influence of the number of hops.\n\nWhile this paper represents the state-of-the-art in the exploration of more rigorous metrics for dialog modeling, it also reminds us how brittle and somewhat arbitrary these remain. Note this is more a recommendation for future research than for revision.\n\nFirst they use the per-response accuracy (basically the next utterance classification among a fixed list of responses). Looking at table 3 clearly shows how absurd this can be in practice: all that matters is a correct API call and a reasonably short dialog, though this would only give us a 1/7 accuracy, as the 6 bot responses needed to reach the API call also have to be exact.\n\nWould the per-dialog accuracy, where all responses must be correct, be better? Table 2 shows how sensitive it is to the experimental protocol. I was initially puzzled that the accuracy for subtask T3 (0.0) was much lower that the accuracy for the full dialog T5 (19.7), until the authors pointed me to the tasks definitions (3.1.1) where T3 requires displaying 3 options while T5 only requires displaying one.\n\nFor the concierge data, what would happen if \u2018correct\u2019 meant being the best, not among the 5-best? \n\nWhile I cannot fault the authors for using standard dialog metrics, and coming up with new ones that are actually too pessimistic, I can think of one way to represent dialogs that could result in more meaningful metrics in goal oriented dialogs. Suppose I sell Virtual Assistants as a service, being paid upon successful completion of a dialog. What is the metric that would maximize my revenue? 
In this restaurant problem, the loss would probably be some weighted sum of the number of errors in the API call, the number of turns to reach that API call and the number of rejected options by the user. However, such as loss cannot be measured on canned dialogs and would either require a real human user or an realistic simulator\n\nAnother issue closely related to representation learning that this paper fails to address or explain properly is what happens if the vocabulary used by the user does not match exactly the vocabulary in the knowledge base. In particular, for the match type algorithm to code \u2018Indian\u2019 as \u2018type of cuisine\u2019, this word would have to occur exactly in the KB. I can imagine situations where the KB uses some obfuscated terminology, and we would like ML to learn the associations rather than humans to hand-describe them.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning End-to-End Goal-Oriented Dialog
["Antoine Bordes", "Y-Lan Boureau", "Jason Weston"]
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End- to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
["dialog", "applications", "dialog systems", "data", "lot", "handcrafting", "new domains", "components", "dialogs"]
https://openreview.net/forum?id=S1Bb3D5gg
https://openreview.net/pdf?id=S1Bb3D5gg
https://openreview.net/forum?id=S1Bb3D5gg&noteId=Bk118K4Ne
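A small sketch, under assumed data structures, of the two metrics the review above contrasts; a dialog counts under per-dialog accuracy only if every one of its responses is exact, which is why it is so much stricter than per-response accuracy:

def per_response_accuracy(predictions, gold):
    # predictions, gold: flat lists of response strings, one per bot turn.
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

def per_dialog_accuracy(pred_dialogs, gold_dialogs):
    # pred_dialogs, gold_dialogs: lists of dialogs, each a list of responses.
    exact = sum(all(p == g for p, g in zip(pd, gd))
                for pd, gd in zip(pred_dialogs, gold_dialogs))
    return exact / len(gold_dialogs)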
rky-ix7Ee
S1Bb3D5gg
ICLR.cc/2017/conference/-/paper428/official/review
{"title": "Review", "rating": "7: Good paper, accept", "review": "SYNOPSIS:\nThis paper introduces a new dataset for evaluating end-to-end goal-oriented dialog systems. All data is generated in the restaurant setting, where the goal is to find availability and eventually book a table based on parameters provided by the user to the bot as part of a dialog. Data is generated by running a simulation using an underlying knowledge base to generate samples for the different parameters (cuisine, price range, etc), and then applying rule-based transformations to render natural language descriptions. The objective is to rank a set of candidate responses for each next turn of the dialog, and evaluation is reported in terms of per-response accuracy and per-dialog accuracy. The authors show that Memory Networks are able to improve over basic bag-of-words baselines.\n\nTHOUGHTS:\nI want to thank the authors for an interesting contribution. Having said that, I am skeptical about the utility of end-to-end trained systems in the narrow-domain setting. In the open-domain setting, there is a strong argument to be made that hand-coding all states and responses would not scale, and hence end-to-end trained methods make a lot of sense. However, in the narrow-domain setting, we usually know and understand the domain quite well, and the goal is to obtain high user satisfaction. Doesn't it then make sense in these cases to use the domain knowledge to engineer the best system possible?\n\nGiven that the domain is already restricted, I'm also a bit disappointed that the goal is to RANK instead of GENERATE responses, although I understand that this makes evaluation much easier. I'm also unsure how these candidate responses would actually be obtained in practice? It seems that the models rank the set of all responses in train/val/test (last sentence before Sec 3.2). Since a key argument for the end-to-end training approach is ease of scaling to new domains without having to manually re-engineer the system, where is this information obtained for a new domain in practice? Generating responses would allow much better generalization to new domains, as opposed to simply ranking some list of hand-collected generic responses, and in my mind this is the weakest part of this work.\n\nFinally, as data is generated using a simulation by expanding (cuisine, price, ...) tuples using NL-generation rules, it necessarily constrains the variability in the training responses. Of course, this is traded off with the ability to generate unlimited data using the simulator. But I was unable to see the list of rules that was used. It would be good to publish this as well.\n\nOverall, despite my skepticism, I think it is an interesting contribution worthy of publication at the conference. \n\n------\n\nI've updated my score following the clarifications and new results.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning End-to-End Goal-Oriented Dialog
["Antoine Bordes", "Y-Lan Boureau", "Jason Weston"]
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End- to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
["dialog", "applications", "dialog systems", "data", "lot", "handcrafting", "new domains", "components", "dialogs"]
https://openreview.net/forum?id=S1Bb3D5gg
https://openreview.net/pdf?id=S1Bb3D5gg
https://openreview.net/forum?id=S1Bb3D5gg&noteId=rky-ix7Ee
BybRJGfNl
SyOvg6jxx
ICLR.cc/2017/conference/-/paper586/official/review
{"title": "Solid paper", "rating": "7: Good paper, accept", "review": "This paper proposed to use a simple count-based exploration technique in high-dimensional RL application (e.g., Atari Games). The counting is based on state hash, which implicitly groups (quantizes) similar state together. The hash is computed either via hand-designed features or learned features (unsupervisedly with auto-encoder). The new state to be explored receives a bonus similar to UCB (to encourage further exploration).\n\nOverall the paper is solid with quite extensive experiments. I wonder how it generalizes to more Atari games. Montezuma\u2019s Revenge may be particularly suitable for approaches that implicitly/explicitly cluster states together (like the proposed one), as it has multiple distinct scenarios, each with small variations in terms of visual appearance, showing clustering structures. On the other hand, such approaches might not work as well if the state space is fully continuous (e.g. in RLLab experiments). \n\nThe authors did not answer my question about why the hash code needs to be updated during training. I think it is mainly because the code still needs to be adaptive for a particular game (to achieve lower reconstruction error) in the first few iterations . After that stabilization is the most important. Sec. 2.3 (Learned embedding) is quite confusing (but very important). I hope that the authors could make it more clear (e.g., by writing an algorithm block) in the next version.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
["Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel"]
Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.
["Deep learning", "Reinforcement Learning", "Games"]
https://openreview.net/forum?id=SyOvg6jxx
https://openreview.net/pdf?id=SyOvg6jxx
https://openreview.net/forum?id=SyOvg6jxx&noteId=BybRJGfNl
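A minimal sketch of the hash-based counting and count-based bonus the abstract and reviews above describe; the code length k, the bonus weight beta, and the class name are assumed choices for illustration, not the paper's exact settings:

import numpy as np

class SimHashCounter:
    def __init__(self, state_dim, k=32, beta=0.01, seed=0):
        # A fixed random projection defines the (static) SimHash code.
        self.A = np.random.default_rng(seed).standard_normal((k, state_dim))
        self.beta = beta
        self.counts = {}

    def bonus(self, state):
        # Hash the state, increment its visit count, and return an
        # MBIE-EB-style exploration bonus beta / sqrt(n(s)).
        code = tuple((self.A @ np.asarray(state) > 0).astype(np.int8))
        n = self.counts.get(code, 0) + 1
        self.counts[code] = n
        return self.beta / np.sqrt(n)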
BJX3nErVg
SyOvg6jxx
ICLR.cc/2017/conference/-/paper586/official/review
{"title": "Final review: significant results in an important problem, but many moving parts", "rating": "6: Marginally above acceptance threshold", "review": "The paper proposes a new exploration scheme for reinforcement learning using locality-sensitive hashing states to build a table of visit counts which are then used to encourage exploration in the style of MBIE-EB of Strehl and Littman.\n\nSeveral points are appealing about this approach: first, it is quite simple compared to the current alternatives (e.g. VIME, density estimation and pseudo-counts). Second, the paper presents results across several domains, including classic benchmarks, continuous control domains, and Atari 2600 games. In addition, there are results for comparison from several other algorithms (DQN variants), many of which are quite recent. The results indicate that the approach clearly improves over the baseline. The results against other exploration algorithms are not as clear (more dependent on the individual domain/game), but I think this is fine as the appeal of the technique is its simplicity. Third, the paper presents results on the sensitivity to the granularity of the abstraction.\n\nI have only one main complaint, which is it seems there was some engineering involved to get this to work, and I do not have much confidence in the robustness of the conclusions. I am left uncertain as to how the story changes given slight perturbations over hyper-parameter values or enabling/disabling of certain choices. For example, how critical was using PixelCNN (or tying the weights?) or noisifying the output in the autoencoder, or what happens if you remove the custom additions to BASS? The granularity results show that the choice of resolution is sensitive, and even across games the story is not consistent.\n\nThe authors decide to use state-based counts instead of state-action based counts, deviating from the theory, which is odd because the reason to used LSH in the first place is to get closer to what MBIE-EB would advise via tabular counts. There are several explanations as to why state-based versus state-action based counts perform similarly in Atari; the authors do not offer any. Why?\n\nIt seems like the technique could be easily used in DQN as well, and many of the variants the authors compare to are DQN-based, so omitting DQN here again seems strange. The authors justify their choice of TRPO by saying it ensures safe policy improvement, though it is not clear that this is still true when adding these exploration bonuses.\n\nThe case study on Montezuma's revenge, while interesting, involves using domain knowledge and so does not really fit well with the rest of the paper.\n\nSo, in the end, simple and elegant idea to help with exploration tested in many domains, though I am not certain which of the many pieces are critical for the story to hold versus just slightly helpful, which could hurt the long-term impact of the paper.\n\n--- After response:\n\nThank you for the thorough response, and again my apologies for the late reply.\n\nI appreciate the follow-up version on the robustness of SimHash and state counting vs. state-action counting.\n\nThe paper addresses an important problem (exploration), suggesting a \"simple\" (compared to density estimation) counting method via hashing. It is a nice alternative approach to the one offered by Bellemare et al. If discussion among reviewers were possible, I would now try to assemble an argument to accept the paper. 
Specifically, I am not as concerned about beating the state of the art in Montezuma's as Reviewer3 as the merit of the current paper is one the simplicity of the hashing and on the wide comparison of domains vs. the baseline TRPO. This paper shows that we should not give up on simple hashing. There still seems to be a bunch of fiddly bits to get this to work, and I am still not confident that these results are easily reproducible. Nonetheless, it is an interesting new contrasting approach to exploration which deserves attention.\n\nNot important for the decision: The argument in the rebuttal concerning DQN & A3C is a bit of a straw man. I did not mention anything at all about A3C, I strictly referred to DQN, which is less sensitive to parameter-tuning than A3C. Also, Bellemare 2016 main result on Montezuma used DQN. Hence the omission of these techniques applied to DQN still seems a bit strange (for the Atari experiments). The figure S9 from Mnih et al. points to instances of asynchronous one-step Sarsa with varied thread counts.. of course this will be sensitive to parameters: it is both asynchronous online algorithms *and* the parameter varied is the thread count! This is hardly indicative of DQN's sensitivity to parameters, since DQN is (a) single-threaded (b) uses experience replay, leading to slower policy changes. Another source of stability, DQN uses a target network that changes infrequently. Perhaps the authors made a mistake in the reference graph in the figure? (I see no Figure 9 in https://arxiv.org/pdf/1602.01783v2.pdf , I assume the authors meant Figure S9)", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
["Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel"]
Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.
["Deep learning", "Reinforcement Learning", "Games"]
https://openreview.net/forum?id=SyOvg6jxx
https://openreview.net/pdf?id=SyOvg6jxx
https://openreview.net/forum?id=SyOvg6jxx&noteId=BJX3nErVg
rkK1pXKNx
SyOvg6jxx
ICLR.cc/2017/conference/-/paper586/official/review
{"title": "Review", "rating": "4: Ok but not good enough - rejection", "review": "This paper introduces a new way of extending the count based exploration approach to domains where counts are not readily available. The way in which the authors do it is through hash functions. Experiments are conducted on several domains including control and Atari. \n\nIt is nice that the authors confirmed the results of Bellemare in that given the right \"density\" estimator, count based exploration can be effective. It is also great the observe that given the right features, we can crack games like Montezuma's revenge to some extend.\n\nI, however, have several complaints:\n\nFirst, by using hashing, the authors did not seem to be able to achieve significant improvements over past approaches. Without \"feature engineering\", the authors achieved only a fraction of the performance achieved in Bellemare et al. on Montezuma's Revenge. The proposed approaches In the control domains, the authors also does not outperform VIME. So experimentally, it is very hard to justify the approach. \n\nSecond, hashing, although could be effective in the domains that the authors tested on, it may not be the best way of estimating densities going forward. As the environments get more complicated, some learning methods, are required for the understanding of the environments instead of blind hashing. The authors claim that the advantage of the proposed method over Bellemare et al. is that one does not have to design density estimators. But I would argue that density estimators have become readily available (PixelCNN, VAEs, Real NVP, GANs) that they can be as easily applied as can hashing. Training the density estimators is not difficult problem as more.\n\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
["Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel"]
Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.
["Deep learning", "Reinforcement Learning", "Games"]
https://openreview.net/forum?id=SyOvg6jxx
https://openreview.net/pdf?id=SyOvg6jxx
https://openreview.net/forum?id=SyOvg6jxx&noteId=rkK1pXKNx
BkxN0nr4l
Hk85q85ee
ICLR.cc/2017/conference/-/paper316/official/review
{"title": "Optimization of a ReLU network under new assumptions", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This work analyzes the continuous-time dynamics of gradient descent when training two-layer ReLU networks (one input, one output, thus only one layer of ReLU units). The work is interesting in the sense that it does not involve some unrealistic assumptions used by previous works with similar goal. Most importantly, this work does not assume independence between input and activations, and it does not rely on noise injection (which can simplify the analysis). Nonetheless, removing these simplifying assumptions comes at the expense of limiting the analysis to:\n1. Only one layer of nonlinear units\n2. Discarding the bias term in ReLU while keeping the input Gaussian (thus constant input trick cannot be used to simulate the bias term).\n3. Imposing strong assumption on the representation on the input/output via (bias-less) ReLU networks: existence of orthonormal bases to represent this relationships.\n\nHaving that said, as far as I can tell, the paper presents original analysis in this new setting, which is interesting and valuable. For example, by exploiting the symmetry in the problem under the assumption 3 I listed above, the authors are able to reduce the high-dimensional dynamics of the gradient descent to a bivariate dynamics (instead of dealing with original size of the parameters). Such reduction to 2D allows the author to rigorously analyze the behavior of the dynamics (e.g. convergence to a saddle point in symmetric case, or to the optimum in non-symmetric case).\n\nClarification Needed: first paragraph of page 2. Near the end of the paragraph you say \"Initialization can be arbitrarily close to origin\", but at the beginning of the same paragraph you state \"initialized randomly with standard deviation of order 1/sqrt(d)\". Aren't these inconsistent?\n\nSome minor comments about the draft:\n1. In section 1, 2nd paragraph: \"We assume x is Gaussian and thus the network is bias free\". Do you mean \"zero-mean\" Gaussian then?\n2. \"standard deviation\" is spelled \"standard derivation\" multiple times in the paper.\n3. Page 6, last paragraph, first line: Corollary 4.1 should be Corollary 4.2\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity
["Yuandong Tian"]
In this paper, we use dynamical system to analyze the nonlinear weight dynamics of two-layered bias-free networks in the form of $g(x; w) = \sum_{j=1}^K \sigma(w_j \cdot x)$, where $\sigma(\cdot)$ is ReLU nonlinearity. We assume that the input $x$ follow Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters $w*$ using $l_2$ loss. We first show that when $K = 1$, the nonlinear dynamics can be written in close form, and converges to $w*$ with at least $(1-\epsilon)/2$ probability, if random weight initializations of proper standard derivation ($\sim 1/\sqrt{d}$) is used, verifying empirical practice. For networks with many ReLU nodes ($K \ge 2$), we apply our close form dynamics and prove that when the teacher parameters $\{w*_j\}_{j=1}^K$ forms orthonormal bases, (1) a symmetric weight initialization yields a convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to $w*$ without local minima. To our knowledge, this is the first proof that shows global convergence in nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with $l_2$ loss. Simulations verify our theoretical analysis.
["Theory", "Deep learning", "Optimization"]
https://openreview.net/forum?id=Hk85q85ee
https://openreview.net/pdf?id=Hk85q85ee
https://openreview.net/forum?id=Hk85q85ee&noteId=BkxN0nr4l
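A rough simulation sketch of the setting analyzed above: gradient descent on g(x; w) = sum_j relu(w_j . x) toward an orthonormal teacher under Gaussian inputs. The sizes, step count, and learning rate are arbitrary assumptions, and the final check ignores possible permutations of the recovered rows:

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def simulate(d=10, K=5, steps=5000, lr=0.05, batch=256, seed=0):
    rng = np.random.default_rng(seed)
    W_star = np.eye(d)[:K]                        # orthonormal teacher rows
    W = rng.standard_normal((K, d)) / np.sqrt(d)  # init std ~ 1/sqrt(d)
    for _ in range(steps):
        X = rng.standard_normal((batch, d))       # Gaussian inputs
        err = relu(X @ W.T).sum(1) - relu(X @ W_star.T).sum(1)
        # Gradient of 0.5 * mean(err^2) w.r.t. W; (X @ W.T > 0) is the ReLU mask.
        grad = ((err[:, None] * (X @ W.T > 0)).T @ X) / batch
        W -= lr * grad
    return np.linalg.norm(W - W_star)  # crude distance to teacher (no permutation)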
SJVUCuuNg
Hk85q85ee
ICLR.cc/2017/conference/-/paper316/official/review
{"title": "Potentially new analysis, but hard to read", "rating": "4: Ok but not good enough - rejection", "review": "The paper proposes a convergence analysis of some two-layer NNs with ReLUs. It is not the first such analysis, but maybe it is novel on the assumptions used in the analysis, and the focus on ReLU nonlinearity that is pretty popular in practice. \n\nThe paper is quite hard to read, with many English mistakes and typos. Nevertheless, the analysis seems to be generally correct. The novelty and the key insights are however not always well motivated or presented. And the argument that the work uses realistic assumptions (Gaussian inputs for example) as opposed to other works, is quite debatable actually. \n\nOverall, the paper looks like a correct analysis work, but its form is really suboptimal in terms of writing/presentation, and the novelty and relevance of the results are not always very clear, unfortunately. The main results and intuition should be more clearly presented, and details could be moved to appendices for example - that could only help to improve the visibility and impact of these interesting results. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity
["Yuandong Tian"]
In this paper, we use dynamical system to analyze the nonlinear weight dynamics of two-layered bias-free networks in the form of $g(x; w) = \sum_{j=1}^K \sigma(w_j \cdot x)$, where $\sigma(\cdot)$ is ReLU nonlinearity. We assume that the input $x$ follow Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters $w*$ using $l_2$ loss. We first show that when $K = 1$, the nonlinear dynamics can be written in close form, and converges to $w*$ with at least $(1-\epsilon)/2$ probability, if random weight initializations of proper standard derivation ($\sim 1/\sqrt{d}$) is used, verifying empirical practice. For networks with many ReLU nodes ($K \ge 2$), we apply our close form dynamics and prove that when the teacher parameters $\{w*_j\}_{j=1}^K$ forms orthonormal bases, (1) a symmetric weight initialization yields a convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to $w*$ without local minima. To our knowledge, this is the first proof that shows global convergence in nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with $l_2$ loss. Simulations verify our theoretical analysis.
["Theory", "Deep learning", "Optimization"]
https://openreview.net/forum?id=Hk85q85ee
https://openreview.net/pdf?id=Hk85q85ee
https://openreview.net/forum?id=Hk85q85ee&noteId=SJVUCuuNg
HkAvHKxNl
Hk85q85ee
ICLR.cc/2017/conference/-/paper316/official/review
{"title": "Hard to read paper; unclear conclusions.", "rating": "4: Ok but not good enough - rejection", "review": "In this paper, the author analyzes the convergence dynamics of a single layer non-linear network under Gaussian iid input assumptions. The first half of the paper, dealing with a single hidden node, was somewhat clear, although I have some specific questions below. The second half, dealing with multiple hidden nodes, was very difficult for me to understand, and the final \"punchline\" is quite unclear. I think the author should focus on intuition and hide detailed derivations and symbols in an appendix. \n\nIn terms of significance, it is very hard for me to be sure how generalizable these results are: the Gaussian assumption is a very strong one, and so is the assumption of iid inputs. Real-world feature inputs are highly correlated and are probably not Gaussian. Such assumptions are not made (as far as I can tell) in recent papers analyzing the convergence of deep networks e.g. Kawaguchi, NIPS 2016. Although the author says the no assumption is made on the independence of activations, this assumption is shifted to the input instead. I think this means that the activations are combinations of iid random variables, and are probably Gaussian like, right? So I'm not sure where this leaves us.\n\nSpecific comments:\n\n1. Please use D_w instead of D to show that D is a function of w, and not a constant. This gets particularly confusing when switching to D(w) and D(e) in Section 3. In general, notation in the paper is hard to follow and should be clearly introduced.\n\n2. Section 3, statement that says \"when the neuron is cut off at sample l, then (D^(t))_u\" what is the relationship between l and u? Also, this is another example of notational inconsistency that causes problems to the reader.\n\n3. Section 3.1, what is F(e, w) and why is D(e) introduced? This was unclear to me.\n\n4. Theorem 3.3 suggests that (if \\epsilon is > 0), then to have the maximal probability of convergence, \\epsilon should be very close to 0, which means that the ball B_r has radius r -> 0? This seems contradictory from Figure 2. \n\n5. Section 4 was really unclear and I still do not understand what the symmetry group really represents. Is there an intuitive explanation why this is important?\n\n6. Figure 5: what is a_j ?\n\nI encourage the author to rewrite this paper for clarity. In it's present form, it would be very difficult to understand the takeaways from the paper.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity
["Yuandong Tian"]
In this paper, we use dynamical system to analyze the nonlinear weight dynamics of two-layered bias-free networks in the form of $g(x; w) = \sum_{j=1}^K \sigma(w_j \cdot x)$, where $\sigma(\cdot)$ is ReLU nonlinearity. We assume that the input $x$ follow Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters $w*$ using $l_2$ loss. We first show that when $K = 1$, the nonlinear dynamics can be written in close form, and converges to $w*$ with at least $(1-\epsilon)/2$ probability, if random weight initializations of proper standard derivation ($\sim 1/\sqrt{d}$) is used, verifying empirical practice. For networks with many ReLU nodes ($K \ge 2$), we apply our close form dynamics and prove that when the teacher parameters $\{w*_j\}_{j=1}^K$ forms orthonormal bases, (1) a symmetric weight initialization yields a convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to $w*$ without local minima. To our knowledge, this is the first proof that shows global convergence in nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with $l_2$ loss. Simulations verify our theoretical analysis.
["Theory", "Deep learning", "Optimization"]
https://openreview.net/forum?id=Hk85q85ee
https://openreview.net/pdf?id=Hk85q85ee
https://openreview.net/forum?id=Hk85q85ee&noteId=HkAvHKxNl
rkYg2xjEg
BJmCKBqgl
ICLR.cc/2017/conference/-/paper262/official/review
{"title": "Why benchmark techniques for IoT on a Xeon?", "rating": "6: Marginally above acceptance threshold", "review": "Dyvedeep presents three approximation techniques for deep vision models aimed at improving inference speed.\nThe techniques are novel as far as I know.\nThe paper is clear, the results are plausible.\n\nThe evaluation of the proposed techniques is does not make a compelling case that someone interested in faster inference would ultimately be well-served by a solution involving the proposed methods.\n\nThe authors delineate \"static\" acceleration techniques (e.g. reduced bit-width, weight pruning) from \"dynamic\" acceleration techniques which are changes to the inference algorithm itself. The delineation would be fine if the use of each family of techniques were independent of the other, but this is not the case. For example, the use of SPET would, I think, conflict with the use of factored weight matrices (I recall this from http://papers.nips.cc/paper/5025-predicting-parameters-in-deep-learning.pdf, but I suspect there may be more recent work). For this reason, a comparison between SPET and factored weight matrices would strengthen the case that SPET is a relevant innovation. In favor of the factored-matrix approach, there would I think be fewer hyperparameters and the computations would make more-efficient use of blocked linear algebra routines--the case for the superiority of SPET might be difficult to make.\n\nThe authors also do not address their choice of the Xeon for benchmarking, when the use cases they identify in the introduction include \"low power\" and \"deeply embedded\" applications. In these sorts of applications, a mobile GPU would be used, not a Xeon. A GPU implementation of a convnet works differently than a CPU implementation in ways that might reduce or eliminate the advantage of the acceleration techniques put forward in this paper.\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
DyVEDeep: Dynamic Variable Effort Deep Neural Networks
["Sanjay Ganapathy", "Swagath Venkataramani", "Balaraman Ravindran", "Anand Raghunathan"]
Deep Neural Networks (DNNs) have advanced the state-of-the-art on a variety of machine learning tasks and are deployed widely in many real-world products. However, the compute and data requirements demanded by large-scale DNNs remains a significant challenge. In this work, we address this challenge in the context of DNN inference. We propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep), which exploit the heterogeneity in the characteristics of inputs to DNNs to improve their compute efficiency while maintaining the same classification accuracy. DyVEDeep equips DNNs with dynamic effort knobs, which in course of processing an input, identify how critical a group of computations are to classify the input. DyVEDeep dynamically focuses its compute effort only on the critical computations, while the skipping/approximating the rest. We propose 3 effort knobs that operate at different levels of granularity viz. neuron, feature and layer levels. We build DyVEDeep versions for 5 popular image recognition benchmarks on 3 image datasets---MNIST, CIFAR and ImageNet. Across all benchmarks, DyVEDeep achieves 2.1X-2.6X reduction in number of scalar operations, which translates to 1.9X-2.3X performance improvement over a Caffe-based sequential software implementation, for negligible loss in accuracy.
["dyvedeep", "dnns", "input", "variety", "machine learning tasks", "many", "products", "compute"]
https://openreview.net/forum?id=BJmCKBqgl
https://openreview.net/pdf?id=BJmCKBqgl
https://openreview.net/forum?id=BJmCKBqgl&noteId=rkYg2xjEg
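An illustrative neuron-level "effort knob" in the spirit of the dynamic approximations debated above; this is not the paper's exact SPET/SDSS/SFMA mechanism, and the fraction frac and threshold tau are assumed knobs used only to show the idea of input-dependent computation skipping:

import numpy as np

def approx_relu_neuron(w, x, frac=0.25, tau=-1.0):
    # w, x: 1-D numpy arrays. Evaluate a fraction of the dot product first;
    # if the extrapolated full sum already looks strongly negative, predict
    # the ReLU saturates at zero and skip the remaining multiply-accumulates.
    k = max(1, int(frac * len(x)))
    partial = float(w[:k] @ x[:k])
    if partial / frac < tau:   # crude extrapolation of the full sum
        return 0.0             # predicted saturation: rest of the work skipped
    return max(0.0, partial + float(w[k:] @ x[k:]))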
BkLHl2ZEe
BJmCKBqgl
ICLR.cc/2017/conference/-/paper262/official/review
{"title": "Interesting ideas, but I'm not sure about the significance.", "rating": "7: Good paper, accept", "review": "This work proposes a number of approximations for speeding up feed-forward network computations at inference time. Unlike much of the previous work in this area which tries to compress a large network, the authors propose algorithms that decide whether to approximate computations for each particular input example. \n\nSpeeding up inference is an important problem and this work takes a novel approach. The presentation is exceptionally clear, the diagrams are very beautiful, the ideas are interesting, and the experiments are good. This is a high-quality paper. I especially enjoyed the description of the different methods proposed (SPET, SDSS, SFMA) to exploit patterns in the classifer. \n\nMy main concern is that the significance of this work is limited because of the additional complexity and computational costs of using these approximations. In the experiments, the DyVEDeep approach was compared to serial implementations of four large classification models --- inference in these models is order of magnitudes faster on systems that support parallelization. I assume that DyVEDeep has little-to-no performance advantage on a system that allows parallelization, and so anyone looking to speed up their inference on a serial system would want to see a comparison between this approach and the model-compression approaches. Thus, I am not sure how much of an impact this approach can have in it's current state.\n\nSuggestions:\n-I wondered what (if any) bounds could be made on the approximation errors of the proposed methods?", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
DyVEDeep: Dynamic Variable Effort Deep Neural Networks
["Sanjay Ganapathy", "Swagath Venkataramani", "Balaraman Ravindran", "Anand Raghunathan"]
Deep Neural Networks (DNNs) have advanced the state-of-the-art on a variety of machine learning tasks and are deployed widely in many real-world products. However, the compute and data requirements demanded by large-scale DNNs remains a significant challenge. In this work, we address this challenge in the context of DNN inference. We propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep), which exploit the heterogeneity in the characteristics of inputs to DNNs to improve their compute efficiency while maintaining the same classification accuracy. DyVEDeep equips DNNs with dynamic effort knobs, which in course of processing an input, identify how critical a group of computations are to classify the input. DyVEDeep dynamically focuses its compute effort only on the critical computations, while the skipping/approximating the rest. We propose 3 effort knobs that operate at different levels of granularity viz. neuron, feature and layer levels. We build DyVEDeep versions for 5 popular image recognition benchmarks on 3 image datasets---MNIST, CIFAR and ImageNet. Across all benchmarks, DyVEDeep achieves 2.1X-2.6X reduction in number of scalar operations, which translates to 1.9X-2.3X performance improvement over a Caffe-based sequential software implementation, for negligible loss in accuracy.
["dyvedeep", "dnns", "input", "variety", "machine learning tasks", "many", "products", "compute"]
https://openreview.net/forum?id=BJmCKBqgl
https://openreview.net/pdf?id=BJmCKBqgl
https://openreview.net/forum?id=BJmCKBqgl&noteId=BkLHl2ZEe
H1nMEJZ4g
BJmCKBqgl
ICLR.cc/2017/conference/-/paper262/official/review
{"title": "Interesting and clearly written paper. My main concerns about this paper, are about the novelty, and the advantages of the proposed techniques over related papers in the area.", "rating": "6: Marginally above acceptance threshold", "review": "The authors describe a series of techniques which can be used to reduce the total amount of computation that needs to be performed in Deep Neural Networks. The authors propose to selectively identify how important a certain set of computations is to the final DNN output, and to use this information to selectively skip certain computations in the network. As deep learning technologies become increasingly widespread on mobile devices, techniques which enable efficient inference on such devices are becoming increasingly important for practical applications. \n\nThe paper is generally well-written and clear to follow. I had two main comments that concern the experimental design, and the relationship to previous work:\n\n1. In the context of deployment on mobile devices, computational costs in terms of both system memory as well as processing are important consideration. While the proposed techniques do improve computational costs, they don\u2019t reduce model size in terms of total number of parameters. Also, the gains obtained using the proposed method appear to be similar to other works that do allow for improvements in terms of both memory and computation (see, e.g., (Han et al., 2015)). It would have been interesting if the authors had reported results when the proposed techniques were applied to models that have been compressed in size as well.\n\nS. Han, H. Mao, and W. J. Dally. \"Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding.\" arXiv prepring arXiv:1510.00149 (2015).\n\n2. The SDSS technique in the paper appears to be very similar to the \u201cPerforated CNN\u201d technique proposed by Figurnov et al. (2015). In that work, as in the authors work, CNN activations are approximated by interpolating responses from neighbors. The authors should comment on the similarity and differences between the proposed method and the referenced work.\n\nFigurnov, Michael, Dmitry Vetrov, and Pushmeet Kohli. \"Perforatedcnns: Acceleration through elimination of redundant convolutions.\" arXiv preprint arXiv:1504.08362 (2015).\n\nOther minor comments appear below:\n\n3. A clarification question: In comparing the proposed methods to the baseline, in Section 4, the authors mention that they used their own custom implementation. However, do the baselines use the same custom implementation, or do they used the optimized BLAS libraries?\n\n4. The authors should also consider citing the following additional references:\n * S. Tan and K. C. Sim, \"Towards implicit complexity control using variable-depth deep neural networks for automatic speech recognition,\" 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, 2016, pp. 5965-5969.\n * Graves, Alex. \"Adaptive Computation Time for Recurrent Neural Networks.\" arXiv preprint arXiv:1603.08983 (2016).\n\n5. Please explain what the Y-axis in Figure 7 represents in the text.\n\n6. Typographical Error: Last paragraph of Section 2: \u201c... are qualitatively different the aforementioned ...\u201d \u2192 \u201c... are qualitatively different from the aforementioned ...\u201d", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
DyVEDeep: Dynamic Variable Effort Deep Neural Networks
["Sanjay Ganapathy", "Swagath Venkataramani", "Balaraman Ravindran", "Anand Raghunathan"]
Deep Neural Networks (DNNs) have advanced the state-of-the-art on a variety of machine learning tasks and are deployed widely in many real-world products. However, the compute and data requirements demanded by large-scale DNNs remains a significant challenge. In this work, we address this challenge in the context of DNN inference. We propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep), which exploit the heterogeneity in the characteristics of inputs to DNNs to improve their compute efficiency while maintaining the same classification accuracy. DyVEDeep equips DNNs with dynamic effort knobs, which in course of processing an input, identify how critical a group of computations are to classify the input. DyVEDeep dynamically focuses its compute effort only on the critical computations, while the skipping/approximating the rest. We propose 3 effort knobs that operate at different levels of granularity viz. neuron, feature and layer levels. We build DyVEDeep versions for 5 popular image recognition benchmarks on 3 image datasets---MNIST, CIFAR and ImageNet. Across all benchmarks, DyVEDeep achieves 2.1X-2.6X reduction in number of scalar operations, which translates to 1.9X-2.3X performance improvement over a Caffe-based sequential software implementation, for negligible loss in accuracy.
["dyvedeep", "dnns", "input", "variety", "machine learning tasks", "many", "products", "compute"]
https://openreview.net/forum?id=BJmCKBqgl
https://openreview.net/pdf?id=BJmCKBqgl
https://openreview.net/forum?id=BJmCKBqgl&noteId=H1nMEJZ4g
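The abstract above describes neuron-level "effort knobs" that skip computations deemed non-critical. As a minimal illustrative sketch only (this is not the authors' implementation; the chunking scheme, threshold, and all names here are our assumptions), one such knob could stop accumulating a dot product once the running partial sum makes a post-ReLU output very unlikely to matter:

```python
import numpy as np

def lazy_neuron(x, w, bias, check_every=64, skip_threshold=-5.0):
    """Accumulate w.x in chunks; bail out early when the partial sum is so
    negative that the final ReLU output is very likely zero. This is an
    illustrative heuristic, not the paper's exact criterion."""
    acc = bias
    for start in range(0, len(x), check_every):
        acc += float(np.dot(w[start:start + check_every],
                            x[start:start + check_every]))
        # Effort knob: if the partial sum is far below zero, approximate
        # the neuron output as 0 and skip the remaining multiply-adds.
        if acc < skip_threshold:
            return 0.0
    return max(acc, 0.0)  # ReLU
```

The trade-off the reviewers discuss is visible even in this toy version: skipped multiply-adds reduce scalar operations, but the parameter count (and thus model size) is unchanged.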
B17yL74He
S1Y0td9ee
ICLR.cc/2017/conference/-/paper461/official/review
{"title": "Poor performance on bioinformatics dataset?", "rating": "5: Marginally below acceptance threshold", "review": "the paper proposed a method mainly for graph classification. The proposal is to decompose graphs objects into hierarchies of small graphs followed by generating vector embeddings and aggregation using deep networks. \nThe approach is reasonable and intuitive however, experiments do not show superiority of their approach. \n\nThe proposed method outperforms Yanardag et al. 2015 and Niepert et al., 2016 on social networks graphs but are quite inferior to Niepert et al., 2016 on bio-informatics datasets. the authors did not report acccuracy for Yanardag et al. 2015 which on similar bio-ddatasets for example NCI1 is 80%, significantly better than achieved by the proposed method. The authors claim that their method is tailored for social networks graph more is not supported by good arguments? what models of graphs is this method more suitable? ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Shift Aggregate Extract Networks
["Francesco Orsini", "Daniele Baracchi", "Paolo Frasconi"]
The Shift Aggregate Extract Network (SAEN) is an architecture for learning representations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying 'shift', 'aggregate' and 'extract' operations on the vector representations of its parts. We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce memory usage and obtain significant speedups. Our method is empirically evaluated on real-world social network datasets, outperforming the current state of the art.
["Supervised Learning"]
https://openreview.net/forum?id=S1Y0td9ee
https://openreview.net/pdf?id=S1Y0td9ee
https://openreview.net/forum?id=S1Y0td9ee&noteId=B17yL74He
r1xXahBNl
S1Y0td9ee
ICLR.cc/2017/conference/-/paper461/official/review
{"title": "Interesting approach, confusing presentation.", "rating": "5: Marginally below acceptance threshold", "review": "The paper contributes to recent work investigating how neural networks can be used on graph-structured data. As far as I can tell, the proposed approach is the following:\n\n 1. Construct a hierarchical set of \"objects\" within the graph. Each object consists of multiple \"parts\" from the set of objects in the level below. There are potentially different ways a part can be part of an object (the different \\pi labels), which I would maybe call \"membership types\". In the experiments, the objects at the bottom level are vertices. At the next level they are radius 0 (just a vertex?) and radius 1 neighborhoods around each vertex, and the membership types here are either \"root\", or \"element\" (depending on whether a vertex is the center of the neighborhood or a neighbor). At the top level there is one object consisting of all of these neighborhoods, with membership types of \"radius 0 neighborhood\" (isn't this still just a vertex?) or \"radius 1 neighborhood\".\n\n 2. Every object has a representation. Each vertex's representation is a one-hot encoding of its degree. To construct an object's representation at the next level, the following scheme is employed:\n\n a. For each object, sum the representation of all of its parts having the same membership type.\n b. Concatenate the sums obtained from different membership types.\n c. Pass this vector through a multi-layer neural net.\n\nI've provided this summary mainly because the description in the paper itself is somewhat hard to follow, and relevant details are scattered throughout the text, so I'd like to verify that my understanding is correct.\n\nSome additional questions I have that weren't clear from the text: how many layers and hidden units were used? What are the dimensionalities of the representations used at each layer? How is final classification performed? What is the motivation for the chosen \"ego-graph\" representation? \n\nThe proposed approach is interesting and novel, the compression technique appears effective, and the results seem compelling. However, the clarity and structure of the writing is quite poor. It took me a while to figure out what was going on---the initial description is provided without any illustrative examples, and it required jumping around the paper to figure for example how the \\pi labels are actually used. Important details around network architecture aren't provided, and very little in the way of motivation is given for many of the choices made. Were other choices of decomposition/object-part structures investigated, given the generality of the shift-aggregate-extract formulation? What motivated the choice of \"ego-graphs\"? Why one-hot degrees for the initial attributes?\n\nOverall, I think the paper contains a useful contribution on a technical level, but the presentation needs to be significantly cleaned up before I can recommend acceptance.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Shift Aggregate Extract Networks
["Francesco Orsini", "Daniele Baracchi", "Paolo Frasconi"]
The Shift Aggregate Extract Network (SAEN) is an architecture for learning representations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying 'shift', 'aggregate' and 'extract' operations on the vector representations of its parts. We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce memory usage and obtain significant speedups. Our method is empirically evaluated on real-world social network datasets, outperforming the current state of the art.
["Supervised Learning"]
https://openreview.net/forum?id=S1Y0td9ee
https://openreview.net/pdf?id=S1Y0td9ee
https://openreview.net/forum?id=S1Y0td9ee&noteId=r1xXahBNl
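The review above summarizes the composition scheme in steps a-c (shift = per-type sums, aggregate = concatenation, extract = MLP). A minimal sketch of that reading follows; it reflects the reviewer's summary rather than the authors' code, and all parameter shapes and names are illustrative assumptions:

```python
import numpy as np

def extract_mlp(v, weights, biases):
    """'Extract' step: a small MLP with tanh layers (placeholder params)."""
    h = v
    for W, b in zip(weights, biases):
        h = np.tanh(W @ h + b)
    return h

def compose_object(part_reps, memberships, membership_types, dim,
                   weights, biases):
    """Compose one object's vector from its parts.
    part_reps: dict part_id -> vector of length `dim`
    memberships: list of (part_id, membership_type) pairs."""
    sums = {t: np.zeros(dim) for t in membership_types}
    for part_id, t in memberships:
        sums[t] += part_reps[part_id]                                # shift
    aggregated = np.concatenate([sums[t] for t in membership_types])  # aggregate
    return extract_mlp(aggregated, weights, biases)                   # extract
```

At the bottom stratum, `part_reps` would hold the one-hot degree encodings the reviewer mentions; higher strata reuse the outputs of lower-level compositions.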
SJP14kfEx
S1Y0td9ee
ICLR.cc/2017/conference/-/paper461/official/review
{"title": "Might be something good here, but key details are missing.", "rating": "3: Clear rejection", "review": "Some of the key details in this paper are very poorly explained or not even explained at all. The model sounds interesting and there may be something good here, but it should not be published in it's current form. \n\nSpecific comments:\n\nThe description of the R_l,pi convolutions in Section 2.1 was unclear. Specifically, I wasn't confident that I understood what the labels pi represented.\n\nThe description of the SAEN structure in section 2.2 was worded poorly. My understanding, based on Equation 1, is that the 'shift' operation is simply a summation of the representations of the member objects, and that the 'aggregate' operation simply concatenates the representations from multiple relations. In the 'shift' step, it seems more appropriate to average over the object's member's representations h_j, rather than sum over them.\n\nThe compression technique presented in Section 2.3 requires that multiple objects at a level have the same representation. Why would this ever occur, given that the representations are real valued and high-dimensional? The text is unintelligible: \"two objects are equivalent if they are made by same sets of parts for all the pi-parameterizations of the R_l,pi decomposition relation.\" \n\nThe 'ego graph patterns' in Figure 1 and 'Ego Graph Neural Network' used in the experiments are never explained in the text, and no references are given. Because of this, I cannot comment on the quality of the experiments.", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}
review
2017
ICLR.cc/2017/conference
Shift Aggregate Extract Networks
["Francesco Orsini", "Daniele Baracchi", "Paolo Frasconi"]
The Shift Aggregate Extract Network (SAEN) is an architecture for learning representations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying 'shift', 'aggregate' and 'extract' operations on the vector representations of its parts. We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce memory usage and obtain significant speedups. Our method is empirically evaluated on real-world social network datasets, outperforming the current state of the art.
["Supervised Learning"]
https://openreview.net/forum?id=S1Y0td9ee
https://openreview.net/pdf?id=S1Y0td9ee
https://openreview.net/forum?id=S1Y0td9ee&noteId=SJP14kfEx
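The third review asks how two real-valued representations could ever coincide. One plausible reading of the quoted sentence (our sketch, not the authors' algorithm) is that equivalence is decided symbolically on the decomposition, before any real-valued vector is computed: objects built from identical multisets of (part, membership type) must receive identical representations, so each group is computed once. Names below are illustrative:

```python
from collections import defaultdict

def compress_objects(memberships):
    """Group objects whose multisets of (part_id, membership_type) coincide.
    Objects in one group necessarily get the same representation, so the
    shift-aggregate-extract computation can be shared across the group.
    memberships: dict object_id -> list of (part_id, membership_type)."""
    groups = defaultdict(list)
    for obj, parts in memberships.items():
        key = tuple(sorted(parts))  # canonical form of the multiset of parts
        groups[key].append(obj)
    return list(groups.values())
```

Under this reading, symmetric graphs (many vertices with identical neighborhoods) are exactly where the reported memory savings and speedups would come from.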
BJ_0DiWNx
BymIbLKgl
ICLR.cc/2017/conference/-/paper97/official/review
{"title": "Limited theoretical novelty and evaluation", "rating": "5: Marginally below acceptance threshold", "review": "Authors show that a contrastive loss for a Siamese architecture can be used for learning representations for planar curves. With the proposed framework, authors are able to learn a representation which is comparable to traditional differential or integral invariants, as evaluated on few toy examples.\n\nThe paper is generally well written and shows an interesting application of the Siamese architecture. However, the experimental evaluation and the results show that these are rather preliminary results as not many of the choices are validated. My biggest concern is in the choice of the negative samples, as the network basically learns only to distinguish between shapes at different scales, instead of recognizing different shapes. It is well known fact that in order to achieve a good performance with the contrastive loss, one has to be careful about the hard negative sampling, as using too easy negatives may lead to inferior results. Thus, this may be the underlying reason for such choice of the negatives? Unfortunately, this is not discussed in the paper.\n\nFurthermore the paper misses a more thorough quantitative evaluation and concentrates more on showing particular examples, instead of measuring more robust statistics over multiple curves (invariance to noise and sampling artifacts).\n\nIn general, the paper shows interesting first steps in this direction, however it is not clear whether the experimental section is strong and thorough enough for the ICLR conference. Also the novelty of the proposed idea is limited as Siamese networks are used for many years and this work only shows that they can be applied to a different task.", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}
review
2017
ICLR.cc/2017/conference
Learning Invariant Representations Of Planar Curves
["Gautam Pai", "Aaron Wetzler", "Ron Kimmel"]
We propose a metric learning framework for the construction of invariant geometric functions of planar curves for the Euclidean and Similarity group of transformations. We leverage the representational power of convolutional neural networks to compute these geometric quantities. In comparison with axiomatic constructions, we show that the invariants approximated by the learning architectures have better numerical qualities such as robustness to noise, resiliency to sampling, as well as the ability to adapt to occlusion and partiality. Finally, we develop a novel multi-scale representation in a similarity metric learning paradigm.
["Computer vision", "Deep learning", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=BymIbLKgl
https://openreview.net/pdf?id=BymIbLKgl
https://openreview.net/forum?id=BymIbLKgl&noteId=BJ_0DiWNx
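The review's central objection concerns which negative pairs feed the contrastive loss. For reference, the standard contrastive loss (Hadsell et al., 2006) over a pair of embeddings is sketched below; the margin value is an illustrative default, and how the `same_pair=False` pairs are sampled is exactly the choice the reviewer questions:

```python
import numpy as np

def contrastive_loss(f_a, f_b, same_pair, margin=1.0):
    """Contrastive loss over one pair of Siamese embeddings.
    same_pair=True  -> pull the embeddings together,
    same_pair=False -> push them at least `margin` apart."""
    d = np.linalg.norm(f_a - f_b)
    if same_pair:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2
```

If negatives are only rescaled versions of the anchor curve, minimizing this loss teaches scale discrimination rather than shape discrimination, which is the reviewer's worry.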
HJehdh-4e
BymIbLKgl
ICLR.cc/2017/conference/-/paper97/official/review
{"title": "filling a much needed gap?", "rating": "6: Marginally above acceptance threshold", "review": "I'm torn on this one. Seeing the MPEG-7 dataset and references to curvature scale space brought to mind the old saying that \"if it's not worth doing, it's not worth doing well.\" There is no question that the MPEG-7 dataset/benchmark got saturated long ago, and it's quite surprising to see it in a submission to a modern ML conference. I brought up the question of \"why use this representation\" with the authors and they said their \"main purpose was to connect the theory of differential geometry of curves with the computational engine of a convolutional neural network.\" Fair enough. I agree these are seemingly different fields, and the authors deserve some credit for connecting them. If we give them the benefit of the doubt that this was worth doing, then the approach they pursue using a Siamese configuration makes sense, and their adaptation of deep convnet frameworks to 1D signals is reasonable. To the extent that the old invariant based methods made use of smoothed/filtered representations coupled with nonlinearities, it's sensible to revisit this problem using convnets. I wouldn't mind seeing this paper accepted, since it's different from the mainstream, but I worry about there being too narrow an audience at ICLR that still cares about this type of shape representation.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Learning Invariant Representations Of Planar Curves
["Gautam Pai", "Aaron Wetzler", "Ron Kimmel"]
We propose a metric learning framework for the construction of invariant geometric functions of planar curves for the Euclidean and Similarity group of transformations. We leverage the representational power of convolutional neural networks to compute these geometric quantities. In comparison with axiomatic constructions, we show that the invariants approximated by the learning architectures have better numerical qualities such as robustness to noise, resiliency to sampling, as well as the ability to adapt to occlusion and partiality. Finally, we develop a novel multi-scale representation in a similarity metric learning paradigm.
["Computer vision", "Deep learning", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=BymIbLKgl
https://openreview.net/pdf?id=BymIbLKgl
https://openreview.net/forum?id=BymIbLKgl&noteId=HJehdh-4e
B10ljK-Nl
BymIbLKgl
ICLR.cc/2017/conference/-/paper97/official/review
{"title": "An interesting representation", "rating": "8: Top 50% of accepted papers, clear accept", "review": "Pros : \n- New representation with nice properties that are derived and compared with a mathematical baseline and background\n- A simple algorithm to obtain the representation\n\nCons :\n- The paper sounds like an applied maths paper, but further analysis on the nature of the representation could be done, for instance, by understanding the nature of each layer, or at least, the first.\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning Invariant Representations Of Planar Curves
["Gautam Pai", "Aaron Wetzler", "Ron Kimmel"]
We propose a metric learning framework for the construction of invariant geometric functions of planar curves for the Euclidean and Similarity group of transformations. We leverage the representational power of convolutional neural networks to compute these geometric quantities. In comparison with axiomatic constructions, we show that the invariants approximated by the learning architectures have better numerical qualities such as robustness to noise, resiliency to sampling, as well as the ability to adapt to occlusion and partiality. Finally, we develop a novel multi-scale representation in a similarity metric learning paradigm.
["Computer vision", "Deep learning", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=BymIbLKgl
https://openreview.net/pdf?id=BymIbLKgl
https://openreview.net/forum?id=BymIbLKgl&noteId=B10ljK-Nl
B1-0khZEl
Sy2fzU9gl
ICLR.cc/2017/conference/-/paper291/official/review
{"title": "Very interesting results, but more details and more quantitative results are needed", "rating": "6: Marginally above acceptance threshold", "review": "\nThis paper proposes the beta-VAE, which is a reasonable but also straightforward generalization of the standard VAE. In particular, a weighting factor beta is added for the KL-divergence term to balance the likelihood and KL-divergence. Experimental results show that tuning this weighting factor is important for learning disentangled representations. A linear-classifier based protocol is proposed for measuring the quality of disentanglement. Impressive illustrations on manipulating latent variables are shown in the paper. \n\nLearning disentangled representations without supervision is an important topic. Showing the effectiveness of VAE for this task is interesting. Generalizing VAE with a weighting factor is straightforward (though reformulating VAE is also interesting), the main contribution of this paper is on the empirical side. \n\nThe proposed protocol for measuring disentangling quality is reasonable. Establishing protocol is one important methodology contribution of this paper, but the presentation of Section 3 is still not good. Little motivation is provided at the beginning of Section 3. Figure 2 is a summary of the algorithm, which is helpful, but it still necessary to intuitively explain the motivation at the first place (e.g., what you expect if a factor is disentangled, and why the performance of a classifier can reflect such an expectation). Moreover, 1) z_diff appeared without any definition in the main text. 2) Use \u201cdecoding\u201d for x~Sim(v,w) may make people confuse the ground truth sampling procedure w ith the trained decoder. \n\nThe illustrative figures on traversing the disentangled factor are impressive, though image generation quality is not as good as InfoGAN (not the main point of this paper). However, 1) it will be helpful to discuss if the good disentangling quality only attribute to the beta factor and VAE framework. For example, the training data in this paper seems to be densely sampled for the visualized factors. Does the sampling density play a critical role? 2) Not too many qualitative results are provided for each experiment? Adding more figures (e.g., in appendix) to cover more factors and seeding images can strength the conclusions drawn in this paper. 3) Another detailed question related to the generalizability of the model: are the seeding image for visualizing faces from unseen subjects or subjects in the training set? (maybe I missed something here.)\n\nQuantitative results are only presented for the synthesized 2D shape. What hinders this paper from reporting quantitative numbers on real data (e.g., the 2D and 3D face data)? One possible reason is that not all factors can be disentangled for real data, but it is still feasible to pick up some well-defined factor to measure the quantitative performance. \n\nQuantitative performance is only measured by the proposed protocol. Since the effectiveness of the protocol is something the paper need to justify, reporting quantitative results using simpler protocol is helpful both for demonstrating the disentangling quality and for justifying the proposed protocol (consistency with other measurement). A simple experiment is facial identity recognition and pose estimation using disentangled features on a standard test set (like in Reed et al, ICML 2014). \n\nIn Figure 6 (left), why ICA is worse than PCA for disentanglement? 
Is it due to the limitation of the ICA algorithm or some other reasons? \n\nIn Figure 6 (right), what is \u201cfactor change accuracy\u201d? According to Appendix A.4 (which is not referred to in the main text), it is the \u201cDisentanglement metric score\u201d. Is that right?\nIf so Figure 6 (right) shows the reconstruction results for the best disentanglement metric score. Then, 1) how about random generation or traversing along a disentangled factor? 2) more importantly, how is the reconstruction/generation results when the disentanglement metric score is suboptimal. \n\nOverall, the results presented in this paper are very interesting, but there are many details to be clarified. Moreover, more quantitative results are also needed. I hope at least some of the above concerns can be addressed. \n\n\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
["Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner"]
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state-of-the-art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.
["constrained variational framework", "framework", "beta", "infogan", "data", "interpretable factorised representation", "world", "supervision"]
https://openreview.net/forum?id=Sy2fzU9gl
https://openreview.net/pdf?id=Sy2fzU9gl
https://openreview.net/forum?id=Sy2fzU9gl&noteId=B1-0khZEl
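The review describes the paper's single modification: a weighting factor beta on the KL term. A minimal sketch of the resulting (negative) objective for a diagonal-Gaussian encoder and Bernoulli decoder follows; shapes, names, and the default beta value are illustrative, not the authors' code:

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Negative beta-VAE objective: reconstruction term plus
    beta * KL(q(z|x) || N(0, I)). Setting beta=1 recovers the standard VAE.
    x, x_recon: binary data and decoder means; mu, logvar: encoder outputs."""
    eps = 1e-8  # numerical safety for the logs
    recon = -np.sum(x * np.log(x_recon + eps)
                    + (1 - x) * np.log(1 - x_recon + eps))
    # Closed-form KL between a diagonal Gaussian and the standard normal.
    kl = -0.5 * np.sum(1 + logvar - mu ** 2 - np.exp(logvar))
    return recon + beta * kl
```

The trade-off all three reviews probe is visible here: larger beta limits latent capacity (pushing toward independent, disentangled factors) at the cost of a worse reconstruction term.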
H16z7IT4l
Sy2fzU9gl
ICLR.cc/2017/conference/-/paper291/official/review
{"title": "", "rating": "5: Marginally below acceptance threshold", "review": "The paper proposes beta-VAE which strengthen the KL divergence between the recognition model and the prior to limit the capacity of latent variables while sacrificing the reconstruction error. This allows the VAE model to learn more disentangled representation. \n\nThe main concern is that the paper didn't present any quantitative result on log likelihood estimation. On the quality of generated samples, although the beta-VAE learns disentangled representation, the generated samples are not as realistic as those based on generative adversarial network, e.g., InfoGAN. Beta-VAE learns some interpretable factors of variation, but it still remains unclear why it is a better (or more efficient) representation than that of standard VAE.\n\nIn experiment, what is the criteria for cross-validation on hyperparameter \\beta?\n\nThere also exists other ways to limit the capacity of the model. The simplest way is to reduce the latent variable dimension. I am wondering how the proposed beta-VAE is a better model than the VAE with reduced, or optimal latent variable dimension.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
["Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner"]
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state-of-the-art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.
["constrained variational framework", "framework", "beta", "infogan", "data", "interpretable factorised representation", "world", "supervision"]
https://openreview.net/forum?id=Sy2fzU9gl
https://openreview.net/pdf?id=Sy2fzU9gl
https://openreview.net/forum?id=Sy2fzU9gl&noteId=H16z7IT4l
HyRZoSLVe
Sy2fzU9gl
ICLR.cc/2017/conference/-/paper291/official/review
{"title": "Simple and effective", "rating": "7: Good paper, accept", "review": "Summary\n===\n\nThis paper presents Beta-VAE, an augmented Variational Auto-Encoder which\nlearns disentangled representations. The VAE objective is derived\nas an approximate relaxation of a constrained optimization problem where\nthe constraint matches the latent code of the encoder to a prior.\nWhen KKT multiplier beta on this constraint is set to 1 the result is the\noriginal VAE objective, but when beta > 1 we obtain Beta-VAE, which simply\nincreases the penalty on the KL divergence term. This encourages the model to\nlearn a more efficient representation because the capacity of the latent\nrepresentation is more limited by beta. The distribution of the latent\nrepresentation is rewarded more when factors are independent because\nthe prior (an isotropic Gaussian) encourages independent factors, so the\nrepresentation should also be disentangled.\n\nA new metric is proposed to evaluate the degree of disentanglement. Given\na setting in which some disentangled latent factors are known, many examples\nare generated which differ in all of these factors except one. These examples\nare encoded into the learned latent representation and a simple classifier\nis used to predict which latent factor was kept constant. If the learned\nrepresentation does not disentangle the constant factor then the classifier\nwill more easily confuse factors and its accuracy will be lower. This\naccuracy is the final number reported.\n\nA synthetic dataset of 2D shapes with known latent factors is created to\ntest the proposed metric and Beta-VAE outperforms a number of baselines\n(notably InfoGAN and the semi-supervised DC-IGN).\n\nQualitative results show that Beta-VAE learns disentangled factors\non the 3D chairs dataset, a dataset of 3D faces, and the celebA dataset\nof face images. The effect of varying Beta is also evaluated using the proposed\nmetric and the latent factors learned on the 2D shapes dataset are explored\nin detail.\n\n\nStrengths\n===\n* Beta-VAE is simple and effective.\n\n* The proposed metric is a novel way of testing whether ground truth factors\nof variation have been identified.\n\n* There is extensive comparison to relevant baselines.\n\n\nWeaknesses\n===\n\n* Section 3 describes the proposed disentanglement metric, however I feel\nI need to read the caption of the associated figure (I thank for adding\nthat) and Appendix 4 to understand the metric intuitively or in detail.\nIt would be easier to read this section if a clear intuition preceeded\na detailed description and I think more space should be devoted to this\nin the paper.\n\n* Appendix 4: Why was the bottom 50% of the resulting scores discarded?\n\n* As indicated in pre-review comments, the disentanglement metric is similar\nto a measure of correlation between latent features. Could the proposed metric\nbe compared to a direct measure of cross-correlation between latent factors\nestimated over the 2D shapes dataset?\n\n\n* The end of section 4.2 observes that high beta values result in low\ndisentanglement, which suggests the most efficient representation is not\ndisentangled. This seems to disagree with the intuition from the approach\nsection that more efficient representations should be disentangled. 
It would\nbe nice to see discussion of potential reasons for this disagreement.\n\n* The writing is somewhat dense.\n\n\nOverall Evaluation\n===\nThe core idea is novel, simple and extensive tests show that it is effective.\nThe proposed evaluation metric is novel might come into broader use.\nThe main downside to the current version of this paper is the presentation,\nwhich provides sufficient detail but could be more clear.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
["Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner"]
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state-of-the-art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.
["constrained variational framework", "framework", "beta", "infogan", "data", "interpretable factorised representation", "world", "supervision"]
https://openreview.net/forum?id=Sy2fzU9gl
https://openreview.net/pdf?id=Sy2fzU9gl
https://openreview.net/forum?id=Sy2fzU9gl&noteId=HyRZoSLVe
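The last review gives the clearest account of the disentanglement metric: pairs of generated examples share exactly one ground-truth factor, and a low-capacity classifier must recover which factor was held fixed from the encoded differences. A minimal sketch of one metric training point, based on that description (the `encoder` and `simulator` callables and the 5-factor simulator are assumptions):

```python
import numpy as np

def zdiff_features(encoder, simulator, fixed_factor, n_pairs, rng):
    """Average |z1 - z2| over pairs of images that share only `fixed_factor`;
    this vector is one classifier input, labelled with `fixed_factor`."""
    diffs = []
    for _ in range(n_pairs):
        v1 = rng.uniform(size=5)          # 5 illustrative generative factors
        v2 = rng.uniform(size=5)          # resample all factors...
        v2[fixed_factor] = v1[fixed_factor]  # ...except the fixed one
        z1, z2 = encoder(simulator(v1)), encoder(simulator(v2))
        diffs.append(np.abs(z1 - z2))
    return np.mean(diffs, axis=0)
```

A linear classifier trained on many such (features, fixed_factor) pairs then yields the reported accuracy: if the representation is disentangled, the fixed factor's latent dimension shows a near-zero difference and is easy to identify.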
rkCS99SVl
Skvgqgqxe
ICLR.cc/2017/conference/-/paper164/official/review
{"title": "official review", "rating": "8: Top 50% of accepted papers, clear accept", "review": "The paper proposes to use reinforcement learning to learn how to compose the words in a sentence, i.e. parse tree, that can be helpful for the downstream tasks. To do that, the shift-reduce framework is employed and RL is used to learn the policy of the two actions SHIFT and REDUCE. The experiments on four datasets (SST, SICK, IMDB, and SNLI) show that the proposed approach outperformed the approach using predefined tree structures (e.g. left-to-right, right-to-left). \n\nThe paper is well written and has two good points. Firstly, the idea of using RL to learn parse trees using downstream tasks is very interesting and novel. And employing the shift-reduce framework is a very smart choice because the set of actions is minimal (shift and reduce). Secondly, what shown in the paper somewhat confirms the need of parse trees. This is indeed interesting because of the current debate on whether syntax is helpful.\n\nI have the following comments:\n- it seems that the authors weren't aware of some recent work using RL to learn structures for composition, e.g. Andreas et al (2016).\n- because different composition functions (e.g. LSTM, GRU, or classical recursive neural net) have different inductive biases, I was wondering if the tree structures found by the model would be independent from the composition function choice.\n- because RNNs in theory are equivalent to Turing machines, I was wondering if restricting the expressiveness of the model (e.g. reducing the dimension) can help the model focus on discovering more helpful tree structures.\n\nRef:\nAndreas et al. Learning to Compose Neural Networks for Question Answering. NAACL 2016", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning to Compose Words into Sentences with Reinforcement Learning
["Dani Yogatama", "Phil Blunsom", "Chris Dyer", "Edward Grefenstette", "Wang Ling"]
We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences. In contrast with prior work on tree-structured models, in which the trees are either provided as input or predicted using supervision from explicit treebank annotations, the tree structures in this work are optimized to improve performance on a downstream task. Experiments demonstrate the benefit of learning task-specific composition orders, outperforming both sequential encoders and recursive encoders based on treebank annotations. We analyze the induced trees and show that while they discover some linguistically intuitive structures (e.g., noun phrases, simple verb phrases), they are different from conventional English syntactic structures.
["words", "sentences", "reinforcement", "reinforcement learning", "neural networks", "representations", "natural language sentences", "contrast", "prior work", "models"]
https://openreview.net/forum?id=Skvgqgqxe
https://openreview.net/pdf?id=Skvgqgqxe
https://openreview.net/forum?id=Skvgqgqxe&noteId=rkCS99SVl
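The review highlights the shift-reduce framework's minimal action set. For concreteness, a minimal sketch of how a SHIFT/REDUCE action sequence deterministically composes word vectors into a sentence vector (the mean-of-children `reduce_fn` is an illustrative stand-in; in the paper the composition is a learned cell and the actions are sampled from a policy trained with REINFORCE on the downstream reward):

```python
import numpy as np

def compose_with_actions(word_vecs, actions, reduce_fn):
    """Run a SHIFT/REDUCE sequence over a sentence.
    SHIFT pushes the next word vector onto the stack; REDUCE pops the top
    two phrase vectors and pushes reduce_fn(left, right). A valid sequence
    for n words has n SHIFTs and n-1 REDUCEs, leaving one sentence vector."""
    stack, buffer = [], list(word_vecs)
    for a in actions:
        if a == "SHIFT":
            stack.append(buffer.pop(0))
        else:  # REDUCE
            right, left = stack.pop(), stack.pop()
            stack.append(reduce_fn(left, right))
    return stack[-1]

# Illustrative usage: a right-branching tree over three toy word vectors.
sent = compose_with_actions(
    [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)],
    ["SHIFT", "SHIFT", "REDUCE", "SHIFT", "REDUCE"],
    reduce_fn=lambda l, r: 0.5 * (l + r),
)
```

Every binary tree over the sentence corresponds to exactly one valid action sequence, which is why a policy over these two actions suffices to search the space of composition orders.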
r19SqUiNe
Skvgqgqxe
ICLR.cc/2017/conference/-/paper164/official/review
{"title": "Accept", "rating": "7: Good paper, accept", "review": "I have not much to add to my pre-review comments.\nIt's a very well written paper with an interesting idea.\nLots of people currently want to combine RL with NLP. It is very en vogue.\nNobody has gotten that to work yet in any really groundbreaking or influential way that results in actually superior performance on any highly relevant or competitive NLP task.\nMost people struggle with the fact that NLP requires very efficient methods on very large datasets and RL is super slow.\nHence, I believe this direction hasn't shown much promise yet and it's not yet clear it ever will due to the slowness of RL.\nBut many directions need to be explored and maybe eventually they will reach a point where they become relevant.\n\nIt is interesting to learn the obviously inherent grammatical structure in language though sadly again, the trees here do not yet capture much of what our intuitions are.\n\nRegardless, it's an interesting exploration, worthy of being discussed at the conference.\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Learning to Compose Words into Sentences with Reinforcement Learning
["Dani Yogatama", "Phil Blunsom", "Chris Dyer", "Edward Grefenstette", "Wang Ling"]
We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences. In contrast with prior work on tree-structured models, in which the trees are either provided as input or predicted using supervision from explicit treebank annotations, the tree structures in this work are optimized to improve performance on a downstream task. Experiments demonstrate the benefit of learning task-specific composition orders, outperforming both sequential encoders and recursive encoders based on treebank annotations. We analyze the induced trees and show that while they discover some linguistically intuitive structures (e.g., noun phrases, simple verb phrases), they are different from conventional English syntactic structures.
["words", "sentences", "reinforcement", "reinforcement learning", "neural networks", "representations", "natural language sentences", "contrast", "prior work", "models"]
https://openreview.net/forum?id=Skvgqgqxe
https://openreview.net/pdf?id=Skvgqgqxe
https://openreview.net/forum?id=Skvgqgqxe&noteId=r19SqUiNe
B1OyMaWNg
Skvgqgqxe
ICLR.cc/2017/conference/-/paper164/official/review
{"title": "Weak experimental results", "rating": "6: Marginally above acceptance threshold", "review": "In this paper, the authors propose a new method to learn hierarchical representations of sentences, based on reinforcement learning. They propose to learn a neural shift-reduce parser, such that the induced tree structures lead to good performance on a downstream task. They use reinforcement learning (more specifically, the policy gradient method REINFORCE) to learn their model. The reward of the algorithm is the evaluation metric of the downstream task. The authors compare two settings, (1) no structure information is given (hence, the only supervision comes from the downstream task) and (2) actions from an external parser is used as supervision to train the policy network, in addition to the supervision from the downstream task. The proposed approach is evaluated on four tasks: sentiment analysis, semantic relatedness, textual entailment and sentence generation.\n\nI like the idea of learning tree representations of text which are useful for a downstream task. The paper is clear and well written. However, I am not convinced by the experimental results presented in the paper. Indeed, on most tasks, the proposed model is far from state-of-the-art models:\n - sentiment analysis, 86.5 v.s. 89.7 (accuracy);\n - semantic relatedness, 0.32 v.s. 0.25 (MSE);\n - textual entailment, 80.5 v.s. 84.6 (accuracy).\nFrom the results presented in the paper, it is hard to know if these results are due to the model, or because of the reinforcement learning algorithm.\n\nPROS:\n - interesting idea: learning structures of sentences adapted for a downstream task.\n - well written paper.\nCONS:\n - weak experimental results (do not really support the claim of the authors).\n\nMinor comments:\nIn the second paragraph of the introduction, one might argue that bag-of-words is also a predominant approach to represent sentences.\nParagraph titles (e.g. in section 3.2) should have a period at the end.\n\n----------------------------------------------------------------------------------------------------------------------\nUPDATE\n\nI am still not convinced by the results presented in the paper, and in particular by the fact that one must combine the words in a different way than left-to-right to obtain state of the art results.\nHowever, I do agree that this is an interesting research direction, and that the results presented in the paper are promising. I am thus updating my score from 5 to 6.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning to Compose Words into Sentences with Reinforcement Learning
["Dani Yogatama", "Phil Blunsom", "Chris Dyer", "Edward Grefenstette", "Wang Ling"]
We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences. In contrast with prior work on tree-structured models, in which the trees are either provided as input or predicted using supervision from explicit treebank annotations, the tree structures in this work are optimized to improve performance on a downstream task. Experiments demonstrate the benefit of learning task-specific composition orders, outperforming both sequential encoders and recursive encoders based on treebank annotations. We analyze the induced trees and show that while they discover some linguistically intuitive structures (e.g., noun phrases, simple verb phrases), they are different from conventional English syntactic structures.
["words", "sentences", "reinforcement", "reinforcement learning", "neural networks", "representations", "natural language sentences", "contrast", "prior work", "models"]
https://openreview.net/forum?id=Skvgqgqxe
https://openreview.net/pdf?id=Skvgqgqxe
https://openreview.net/forum?id=Skvgqgqxe&noteId=B1OyMaWNg
Hyq3zhbVg
SJg498clg
ICLR.cc/2017/conference/-/paper310/official/review
{"title": "Review", "rating": "3: Clear rejection", "review": "The paper proposes a model that aims at learning to label nodes of graph in a semi-supervised setting. The idea of the model is based on the use of the graph structure to regularize the representations learned at the node levels. Experimental results are provided on different tasks\n\nThe underlying idea of this paper (graph regularization) has been already explored in different papers \u2013 e.g 'Learning latent representations of nodes for classifying in heterogeneous social networks' [Jacob et al. 2014], [Weston et al 2012] where a real graph structure is used instead of a built one. The experiments lack of strong comparisons with other graph models (e.g Iterative Classification, 'Learning from labeled and unlabeled data on a directed graph', ...). So the novelty of the paper and the experimental protocol are not strong enough to accpet the paper.\n\nPros:\n* Learning over graph is an important topic\n\nCons:\n* Many existing approaches have already exploited the same types of ideas, resulting in very close models\n* Lack of comparison w.r.t existing models\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neural Graph Machines: Learning Neural Networks Using Graphs
["Thang D. Bui", "Sujith Ravi", "Vivek Ramavajjala"]
Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural network architectures, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training objective for neural networks, Neural Graph Machines, for combining the power of neural networks and label propagation. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs. The proposed method is experimentally validated on a wide range of tasks (multi-label classification on social graphs, news categorization and semantic intent classification) using different architectures (NNs, CNNs, and LSTM RNNs).
["Semi-Supervised Learning", "Natural language processing", "Applications"]
https://openreview.net/forum?id=SJg498clg
https://openreview.net/pdf?id=SJg498clg
https://openreview.net/forum?id=SJg498clg&noteId=Hyq3zhbVg
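All three reviews of this paper relate its objective to the graph-regularized training of Weston et al. (2012, eq. 9). A minimal sketch of such an objective, combining a supervised loss on labelled nodes with a penalty pulling neighbours' hidden representations together (the squared-distance penalty, the default `alpha`, and all names are illustrative assumptions, not the paper's exact eq. 4):

```python
import numpy as np

def graph_regularized_loss(logits, labels, hidden, edges, alpha=0.1,
                           supervised_loss=None):
    """Supervised loss plus a graph smoothness penalty.
    hidden: dict node_id -> hidden representation vector
    edges:  list of (i, j, weight) for graph neighbours."""
    if supervised_loss is None:
        supervised_loss = lambda l, y: np.sum((l - y) ** 2)
    loss = supervised_loss(logits, labels)
    for i, j, w in edges:
        # Bias neighbouring nodes toward similar hidden representations,
        # in the same vein as label propagation.
        loss += alpha * w * np.sum((hidden[i] - hidden[j]) ** 2)
    return loss
```

The reviewers' complaint is precisely that this template predates the paper; what varies here is only which network produces `hidden` (FF, CNN, or LSTM) and how the graph is built.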
SkitQvmNl
SJg498clg
ICLR.cc/2017/conference/-/paper310/official/review
{"title": "Very similar to previous work, rebranded.", "rating": "3: Clear rejection", "review": "The authors introduce a semi-supervised method for neural networks, inspired from label propagation.\n\nThe method appears to be exactly the same than the one proposed in (Weston et al, 2008) (the authors cite the 2012 paper). The optimized objective function in eq (4) is exactly the same than eq (9) in (Weston et al, 2008).\n\nAs possible novelty, the authors propose to use the adjacency matrix as input to the neural network, when there are no other features, and show success on the BlogCatalog dataset.\n\nExperiments on text classification use neighbors according to word2vec average embedding to build the adjacency matrix. Top reported accuracies are not convincing compared to (Zhang et al, 2015) reported performance. Last experiment is on semantic intent classification, which a custom dataset; neighbors are also found according to a word2vec metric.\n\nIn summary, the paper propose few applications to the original (Weston et al, 2008) paper. It rebrands the algorithm under a new name, and does not bring any scientific novelty, and the experimental section lacks existing baselines to be convincing.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neural Graph Machines: Learning Neural Networks Using Graphs
["Thang D. Bui", "Sujith Ravi", "Vivek Ramavajjala"]
Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural network architectures, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training objective for neural networks, Neural Graph Machines, for combining the power of neural networks and label propagation. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs. The proposed method is experimentally validated on a wide range of tasks (multi-label classification on social graphs, news categorization and semantic intent classification) using different architectures (NNs, CNNs, and LSTM RNNs).
["Semi-Supervised Learning", "Natural language processing", "Applications"]
https://openreview.net/forum?id=SJg498clg
https://openreview.net/pdf?id=SJg498clg
https://openreview.net/forum?id=SJg498clg&noteId=SkitQvmNl
BJofT1mNg
SJg498clg
ICLR.cc/2017/conference/-/paper310/official/review
{"title": "Very similar to previous work.", "rating": "4: Ok but not good enough - rejection", "review": "This paper proposes the Neural Graph Machine that adds in graph regularization on neural network hidden representations to improve network learning and take the graph structure into account. The proposed model, however, is almost identical to that of Weston et al. 2012.\n\nAs the authors have clarified in the answers to the questions, there are a few new things that previous work did not do:\n\n1. they showed that graph augmented training for a range of different types of networks, including FF, CNN, RNNs etc. and works on a range of problems.\n2. graphs help to train better networks, e.g. 3 layer CNN with graphs does as well as than 9 layer CNNs\n3. graph augmented training works on a variety of different kinds of graphs.\n\nHowever, all these points mentioned above seems to simply be different applications of the graph augmented training idea, and observations made during the applications. I think it is therefore not proper to call the proposed model a novel model with a new name Neural Graph Machine, but rather making it clear in the paper that this is an empirical study of the model proposed by Weston et al. 2012 to different problems would be more acceptable.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neural Graph Machines: Learning Neural Networks Using Graphs
["Thang D. Bui", "Sujith Ravi", "Vivek Ramavajjala"]
Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural network architectures, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training objective for neural networks, Neural Graph Machines, for combining the power of neural networks and label propagation. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs. The proposed method is experimentally validated on a wide range of tasks (multi-label classification on social graphs, news categorization and semantic intent classification) using different architectures (NNs, CNNs, and LSTM RNNs).
["Semi-Supervised Learning", "Natural language processing", "Applications"]
https://openreview.net/forum?id=SJg498clg
https://openreview.net/pdf?id=SJg498clg
https://openreview.net/forum?id=SJg498clg&noteId=BJofT1mNg
S1nGIQ-Vl
By1snw5gl
ICLR.cc/2017/conference/-/paper435/official/review
{"title": "O(mn)?", "rating": "4: Ok but not good enough - rejection", "review": "L-SR1 seems to have O(mn) time complexity. I miss this information in your paper. \nYour experimental results suggest that L-SR1 does not outperform Adadelta (I suppose the same for Adam). \nGiven the time complexity of L-SR1, the x-axis showing time would suggest that L-SR1 is much (say, m times) slower. \n\"The memory size of 2 had the lowest minimum test loss over 90\" suggests that the main driven force of L-SR1 \nwas its momentum, i.e., the second-order information was rather useless.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
L-SR1: A Second Order Optimization Method for Deep Learning
["Vivek Ramamurthy", "Nigel Duffy"]
We describe L-SR1, a new second order method to train deep neural networks. Second order methods hold great promise for distributed training of deep networks. Unfortunately, they have not proven practical. Two significant barriers to their success are inappropriate handling of saddle points, and poor conditioning of the Hessian. L-SR1 is a practical second order method that addresses these concerns. We provide experimental results showing that L-SR1 performs at least as well as Nesterov's Accelerated Gradient Descent, on the MNIST and CIFAR10 datasets. For the CIFAR10 dataset, we see competitive performance on shallow networks like LeNet5, as well as on deeper networks like residual networks. Furthermore, we perform an experimental analysis of L-SR1 with respect to its hyper-parameters to gain greater intuition. Finally, we outline the potential usefulness of L-SR1 in distributed training of deep neural networks.
["second order optimization", "deep neural networks", "distributed training", "deep", "deep learning", "new second order", "second order methods", "great promise", "deep networks", "practical"]
https://openreview.net/forum?id=By1snw5gl
https://openreview.net/pdf?id=By1snw5gl
https://openreview.net/forum?id=By1snw5gl&noteId=S1nGIQ-Vl
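The reviews contrast SR1 with BFGS at saddle points. For reference, the classical symmetric rank-one update is sketched below (this is the textbook form from Nocedal & Wright, not the paper's limited-memory implementation; the skip tolerance is the conventional safeguard). Unlike BFGS, the update need not keep the Hessian approximation positive definite, which is exactly why it can model saddle-point curvature; a limited-memory variant stores only m recent (s, y) pairs, which is where the O(mn) per-iteration cost the first review asks about comes from:

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one Hessian-approximation update:
        B+ = B + (y - B s)(y - B s)^T / ((y - B s)^T s)
    where s is the step and y the gradient difference."""
    r = y - B @ s
    denom = r @ s
    # Standard skip rule: drop the update when the denominator is tiny
    # relative to ||r|| ||s||, to avoid numerical blow-up.
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom
```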
rk3f2SyVg
By1snw5gl
ICLR.cc/2017/conference/-/paper435/official/review
{"title": "Address better optimization at saddle points with symmetric rank-one method which does not guarantee pos. def. update matrix, vs. BFGS approach. Investigating this optimization with limited memory version or SR1", "rating": "5: Marginally below acceptance threshold", "review": "It is an interesting idea to go after saddle points in the optimization with an SR1 update and a good start in experiments, but missing important comparisons to recent 2nd order optimizations such as Adam, other Hessian free methods (Martens 2012), Pearlmutter fast exact multiplication by the Hessian. From the mnist/cifar curves it is not really showing an advantage to AdaDelta/Nag (although this is stated), and much more experimentation is needed to make a claim about mini-batch insensitivity to performance, can you show error rates on a larger scale task?", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
L-SR1: A Second Order Optimization Method for Deep Learning
["Vivek Ramamurthy", "Nigel Duffy"]
We describe L-SR1, a new second order method to train deep neural networks. Second order methods hold great promise for distributed training of deep networks. Unfortunately, they have not proven practical. Two significant barriers to their success are inappropriate handling of saddle points, and poor conditioning of the Hessian. L-SR1 is a practical second order method that addresses these concerns. We provide experimental results showing that L-SR1 performs at least as well as Nesterov's Accelerated Gradient Descent, on the MNIST and CIFAR10 datasets. For the CIFAR10 dataset, we see competitive performance on shallow networks like LeNet5, as well as on deeper networks like residual networks. Furthermore, we perform an experimental analysis of L-SR1 with respect to its hyper-parameters to gain greater intuition. Finally, we outline the potential usefulness of L-SR1 in distributed training of deep neural networks.
["second order optimization", "deep neural networks", "distributed training", "deep", "deep learning", "new second order", "second order methods", "great promise", "deep networks", "practical"]
https://openreview.net/forum?id=By1snw5gl
https://openreview.net/pdf?id=By1snw5gl
https://openreview.net/forum?id=By1snw5gl&noteId=rk3f2SyVg
SyNjWlG4x
By1snw5gl
ICLR.cc/2017/conference/-/paper435/official/review
{"title": "Interesting work, but not ready to be published", "rating": "4: Ok but not good enough - rejection", "review": "The paper proposes a new second-order method L-SR1 to train deep neural networks. It is claimed that the method addresses two important optimization problems in this setting: poor conditioning of the Hessian and proliferation of saddle points. The method can be viewed as a concatenation of SR1 algorithm of Nocedal & Wright (2006) and limited-memory representations Byrd et al. (1994). First of all, I am missing a more formal, theoretical argument in this work (in general providing more intuition would be helpful too), which instead is provided in the works of Dauphin (2014) or Martens. The experimental section in not very convincing considering that the performance in terms of the wall-clock time is not reported and the advantage over some competitor methods is not very strong even in terms of epochs. I understand that the authors are optimizing their implementation still, but the question is: considering the experiments are not convincing, why would anybody bother to implement L-SR1 to train their deep models? The work is not ready to be published.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
L-SR1: A Second Order Optimization Method for Deep Learning
["Vivek Ramamurthy", "Nigel Duffy"]
We describe L-SR1, a new second order method to train deep neural networks. Second order methods hold great promise for distributed training of deep networks. Unfortunately, they have not proven practical. Two significant barriers to their success are inappropriate handling of saddle points, and poor conditioning of the Hessian. L-SR1 is a practical second order method that addresses these concerns. We provide experimental results showing that L-SR1 performs at least as well as Nesterov's Accelerated Gradient Descent, on the MNIST and CIFAR10 datasets. For the CIFAR10 dataset, we see competitive performance on shallow networks like LeNet5, as well as on deeper networks like residual networks. Furthermore, we perform an experimental analysis of L-SR1 with respect to its hyper-parameters to gain greater intuition. Finally, we outline the potential usefulness of L-SR1 in distributed training of deep neural networks.
["second order optimization", "deep neural networks", "distributed training", "deep", "deep learning", "new second order", "second order methods", "great promise", "deep networks", "practical"]
https://openreview.net/forum?id=By1snw5gl
https://openreview.net/pdf?id=By1snw5gl
https://openreview.net/forum?id=By1snw5gl&noteId=SyNjWlG4x
S1Jpha-Vl
HysBZSqlx
ICLR.cc/2017/conference/-/paper238/official/review
{"title": "", "rating": "7: Good paper, accept", "review": "This paper presents a valuable new collection of video game benchmarks, in an extendable framework, and establishes initial baselines on a few of them.\n\nReward structures: for how many of the possible games have you implemented the means to extract scores and incremental reward structures? From the github repo it looks like about 10 -- do you plan to add more, and when?\n\n\u201crivalry\u201d training: this is one of the weaker components of the paper, and it should probably be emphasised less. On this topic, there is a vast body of (uncited) multi-agent literature, it is a well-studied problem setup (more so than RL itself). To avoid controversy, I would recommend not claiming any novel contribution on the topic (I don\u2019t think that you really invented \u201ca new method to train an agent by enabling it to train against several opponents\u201d nor \u201ca new benchmarking technique for agents evaluation, by enabling them to compete against each other, rather than playing against the in-game AI\u201d). Instead, just explain that you have established single-agent and multi-agent baselines for your new benchmark suite.\n\nYour definition of Q-function (\u201cpredicts the score at the end of the game given the current state and selected action\u201d) is incorrect. It should read something like: it estimates the cumulative discounted reward that can be obtained from state s, starting with action a (and then following a certain policy).\n\nMinor:\n* Eq (1): the Q-net inside the max() is the target network, with different parameters theta\u2019\n* the Du et al. reference is missing the year\n* some of the other references should point at the corresponding published papers instead of the arxiv versions", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Playing SNES in the Retro Learning Environment
["Nadav Bhonker", "Shai Rozenberg", "Itay Hubara"]
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research has been carried out in the field of reinforcement learning, and numerous algorithms have been introduced aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment (RLE), that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing more video games and consoles to be easily added, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.
["Reinforcement Learning", "Deep learning", "Games"]
https://openreview.net/forum?id=HysBZSqlx
https://openreview.net/pdf?id=HysBZSqlx
https://openreview.net/forum?id=HysBZSqlx&noteId=S1Jpha-Vl
H1f6QHHVl
HysBZSqlx
ICLR.cc/2017/conference/-/paper238/official/review
{"title": "Final review: Nice software contribution, expected more significant scientific contributions", "rating": "5: Marginally below acceptance threshold", "review": "The paper presents a new environment, called Retro Learning Environment (RLE), for reinforcement learning. The authors focus on Super Nintendo but claim that the interface supports many others (including ALE). Benchmark results are given for standard algorithms in 5 new Super Nintendo games, and some results using a new \"rivalry metric\".\n\nThese environments (or, more generally, standardized evaluation methods like public data sets, competitions, etc.) have a long history of improving the quality of AI and machine learning research. One example in the past few years was the Atari Learning Environment (ALE) which has now turned into a standard benchmark for comparison of algorithms and results. In this sense, the RLE could be a worthy contribution to the field by encouraging new challenging domains for research.\n\nThat said, the main focus of this paper is presenting this new framework and showcasing the importance of new challenging domains. The results of experiments themselves are for existing algorithms. There are some new results that show reward shaping and policy shaping (having a bias toward going right in Super Mario) help during learning. And, yes, domain knowledge helps, but this is obvious. The rivalry training is an interesting idea, when training against a different opponent, the learner overfits to that opponent and forgets to play against the in-game AI; but then oddly, it gets evaluated on how well it does against the in-game AI! \n\nAlso the part of the paper that describes the scientific results (especially the rivalry training) is less polished, so this is disappointing. In the end, I'm not very excited about this paper.\n\nI was hoping for a more significant scientific contribution to accompany in this new environment. It's not clear if this is necessary for publication, but also it's not clear that ICLR is the right venue for this work due to the contribution being mainly about the new code (for example, mloss.org could be a better 'venue', JMLR has an associated journal track for accompanying papers: http://www.jmlr.org/mloss/)\n\n--- Post response:\n\nThank you for the clarifications. Ultimately I have not changed my opinion on the paper. Though I do think RLE could have a nice impact long-term, there is little new science in this paper, ad it's either too straight-forward (reward shaping, policy-shaping) or not quite developed enough (rivalry training).", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Playing SNES in the Retro Learning Environment
["Nadav Bhonker", "Shai Rozenberg", "Itay Hubara"]
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research has been carried out in the field of reinforcement learning, and numerous algorithms have been introduced aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment (RLE), that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing more video games and consoles to be easily added, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.
["Reinforcement Learning", "Deep learning", "Games"]
https://openreview.net/forum?id=HysBZSqlx
https://openreview.net/pdf?id=HysBZSqlx
https://openreview.net/forum?id=HysBZSqlx&noteId=H1f6QHHVl