Dataset schema (column summary from the dataset viewer). Each record below lists these fields in this order, one field per line:

- note_id: string, 9–12 characters
- forum_id: string, 9–13 characters
- invitation: string, 40–95 characters
- content: string, 44–35k characters (a JSON object with title, rating, review, and confidence)
- type: string, 1 distinct value
- year: string, 7 distinct values
- venue: string, 171 distinct values
- paper_title: string, 0–188 characters
- paper_authors: string, 2–1.01k characters
- paper_abstract: string, 0–5k characters
- paper_keywords: string, 2–679 characters
- forum_url: string, 41–45 characters
- pdf_url: string, 39–43 characters
- review_url: string, 58–64 characters
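A minimal sketch of consuming one of these records in Python: parse the JSON-encoded content field and split the numeric prefix off the rating and confidence strings. The inline record is abbreviated from the first row below; how the rows are loaded (JSON lines, Parquet, etc.) is left out and would depend on the distribution format.

```python
import json

# Abbreviated from the first record below; only the fields used here.
record = {
    "note_id": "ryhZ3-M4l",
    "forum_id": "HkwoSDPgg",
    "content": (
        '{"title": "Nice paper, strong accept", '
        '"rating": "9: Top 15% of accepted papers, strong accept", '
        '"confidence": "4: The reviewer is confident but not absolutely '
        'certain that the evaluation is correct", '
        '"review": "..."}'
    ),
}

content = json.loads(record["content"])
# Ratings and confidences are stored as "N: description" strings.
rating = int(content["rating"].split(":", 1)[0])          # 9
confidence = int(content["confidence"].split(":", 1)[0])  # 4
print(content["title"], rating, confidence)
```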
ryhZ3-M4l
HkwoSDPgg
ICLR.cc/2017/conference/-/paper45/official/review
{"title": "Nice paper, strong accept", "rating": "9: Top 15% of accepted papers, strong accept", "review": "This paper addresses the problem of achieving differential privacy in a very general scenario where a set of teachers is trained on disjoint subsets of sensitive data and the student performs prediction based on public data labeled by teachers through noisy voting. I found the approach altogether plausible and very clearly explained by the authors. Adding more discussion of the bound (and its tightness) from Theorem 1 itself would be appreciated. A simple idea of adding perturbation error to the counts, known from differentially-private literature, is nicely re-used by the authors and elegantly applied in a much broader (non-convex setting) and practical context than in a number of differentially-private and other related papers. The generality of the approach, clear improvement over predecessors, and clarity of the writing makes the method worth publishing.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
["Nicolas Papernot", "Mart\u00edn Abadi", "\u00dalfar Erlingsson", "Ian Goodfellow", "Kunal Talwar"]
Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as ''teachers'' for a ''student'' model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.
["student", "model", "teachers", "knowledge transfer", "deep learning", "private training data", "data", "models", "machine", "applications"]
https://openreview.net/forum?id=HkwoSDPgg
https://openreview.net/pdf?id=HkwoSDPgg
https://openreview.net/forum?id=HkwoSDPgg&noteId=ryhZ3-M4l
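The PATE abstract above describes the student's labels being chosen by noisy voting among the teachers. Below is a minimal sketch of that aggregation step, with Laplace noise added to the per-class vote counts before taking the argmax; the noise scale gamma and the teacher count are illustrative placeholders, not the paper's configuration.

```python
import numpy as np

def noisy_aggregate(teacher_preds, num_classes, gamma=0.05, rng=None):
    """Noisy plurality vote over an ensemble of teachers (PATE-style).

    teacher_preds: 1-D integer array, one predicted label per teacher.
    gamma: scale parameter; Lap(1/gamma) noise is added to each vote count
    (the value here is illustrative, not derived from a privacy budget).
    """
    rng = rng or np.random.default_rng()
    counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))

# Example: 250 teachers labeling one public input over 10 classes.
rng = np.random.default_rng(0)
votes = rng.integers(0, 10, size=250)
student_label = noisy_aggregate(votes, num_classes=10, rng=rng)
```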
HJyf86bNx
HkwoSDPgg
ICLR.cc/2017/conference/-/paper45/official/review
{"title": "A nice contribution to differentially-private deep learning", "rating": "9: Top 15% of accepted papers, strong accept", "review": "Altogether a very good paper, a nice read, and interesting. The work advances the state of the art on differentially-private deep learning, is quite well-written, and relatively thorough.\n\nOne caveat is that although the approach is intended to be general, no theoretical guarantees are provided about the learning performance. Privacy-preserving machine learning papers often analyze both the privacy (in the worst case, DP setting) and the learning performance (often under different assumptions). Since the learning performance might depend on the choice of architecture; future experimentation is encouraged, even using the same data sets, with different architectures. If this will not be added, then please justify the choice of architecture used, and/or clarify what can be generalized about the observed learning performance.\n\nAnother caveat is that the reported epsilons are not those that can be privately released; the authors note that their technique for doing so would change the resulting epsilon. However this would need to be resolved in order to have a meaningful comparison to the epsilon-delta values reported in related work.\n\nFinally, as has been acknowledged in the paper, the present approach may not work on other natural data types. Experiments on other data sets is strongly encouraged. Also, please cite the data sets used.\n\nOther comments:\n\nDiscussion of certain parts of the related work are thorough. However, please add some survey/discussion of the related work on differentially-private semi-supervised learning. For example, in the context of random forests, the following paper also proposed differentially-private semi-supervised learning via a teacher-learner approach (although not denoted as \u201cteacher-learner\u201d). The only time the private labeled data is used is when learning the \u201cprimary ensemble.\u201d A \"secondary ensemble\" is then learned only from the unlabeled (non-private) data, with pseudo-labels generated by the primary ensemble.\n\nG. Jagannathan, C. Monteleoni, and K. Pillaipakkamnatt: A Semi-Supervised Learning Approach to Differential Privacy. Proc. 2013 IEEE International Conference on Data Mining Workshops, IEEE Workshop on Privacy Aspects of Data Mining (PADM), 2013.\n\nSection C. does a nice comparison of approaches. Please make sure the quantitative results here constitute an apples-to-apples comparison with the GAN results. \n\nThe paper is extremely well-written, for the most part. Some places needing clarification include:\n- Last paragraph of 3.1. \u201call teachers\u2026.get the same training data\u2026.\u201d This should be rephrased to make it clear that it is not the same w.r.t. all the teachers, but w.r.t. the same teacher on the neighboring database.\n- 4.1: The authors state: \u201cThe number n of teachers is limited by a trade-off between the classification task\u2019s complexity and the available data.\u201d However, since this tradeoff is not formalized, the statement is imprecise. In particular, if the analysis is done in the i.i.d. setting, the tradeoff would also likely depend on the relation of the target hypothesis to the data distribution.\n- Discussion of figure 3 was rather unclear in the text and caption and should be revised for clarity. In the text section, at first the explanation seems to imply that a larger gap is better (as is also indicated in the caption). 
However later it is stated that the gap stays under 20%. These sentences seem contradictory, which is likely not what was intended.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
["Nicolas Papernot", "Mart\u00edn Abadi", "\u00dalfar Erlingsson", "Ian Goodfellow", "Kunal Talwar"]
Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as ''teachers'' for a ''student'' model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.
["student", "model", "teachers", "knowledge transfer", "deep learning", "private training data", "data", "models", "machine", "applications"]
https://openreview.net/forum?id=HkwoSDPgg
https://openreview.net/pdf?id=HkwoSDPgg
https://openreview.net/forum?id=HkwoSDPgg&noteId=HJyf86bNx
HJNWD6Z4l
HkwoSDPgg
ICLR.cc/2017/conference/-/paper45/official/review
{"title": "Good theory", "rating": "7: Good paper, accept", "review": "This paper discusses how to guarantee privacy for training data. In the proposed approach multiple models trained with disjoint datasets are used as ``teachers'' model, which will train a ``student'' model to predict an output chosen by noisy voting among all of the teachers. \n\nThe theoretical results are nice but also intuitive. Since teachers' result are provided via noisy voting, the student model may not duplicate the teacher's behavior. However, the probabilistic bound has quite a number of empirical parameters, which makes me difficult to decide whether the security is 100% guaranteed or not.\n\nThe experiments on MNIST and SVHN are good. However, as the paper claims, the proposed approach may be mostly useful for sensitive data like medical histories, it will be nice to conduct one or two experiments on such applications. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
["Nicolas Papernot", "Mart\u00edn Abadi", "\u00dalfar Erlingsson", "Ian Goodfellow", "Kunal Talwar"]
Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as ''teachers'' for a ''student'' model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.
["student", "model", "teachers", "knowledge transfer", "deep learning", "private training data", "data", "models", "machine", "applications"]
https://openreview.net/forum?id=HkwoSDPgg
https://openreview.net/pdf?id=HkwoSDPgg
https://openreview.net/forum?id=HkwoSDPgg&noteId=HJNWD6Z4l
BybRJGfNl
SyOvg6jxx
ICLR.cc/2017/conference/-/paper586/official/review
{"title": "Solid paper", "rating": "7: Good paper, accept", "review": "This paper proposed to use a simple count-based exploration technique in high-dimensional RL application (e.g., Atari Games). The counting is based on state hash, which implicitly groups (quantizes) similar state together. The hash is computed either via hand-designed features or learned features (unsupervisedly with auto-encoder). The new state to be explored receives a bonus similar to UCB (to encourage further exploration).\n\nOverall the paper is solid with quite extensive experiments. I wonder how it generalizes to more Atari games. Montezuma\u2019s Revenge may be particularly suitable for approaches that implicitly/explicitly cluster states together (like the proposed one), as it has multiple distinct scenarios, each with small variations in terms of visual appearance, showing clustering structures. On the other hand, such approaches might not work as well if the state space is fully continuous (e.g. in RLLab experiments). \n\nThe authors did not answer my question about why the hash code needs to be updated during training. I think it is mainly because the code still needs to be adaptive for a particular game (to achieve lower reconstruction error) in the first few iterations . After that stabilization is the most important. Sec. 2.3 (Learned embedding) is quite confusing (but very important). I hope that the authors could make it more clear (e.g., by writing an algorithm block) in the next version.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
["Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel"]
Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.
["Deep learning", "Reinforcement Learning", "Games"]
https://openreview.net/forum?id=SyOvg6jxx
https://openreview.net/pdf?id=SyOvg6jxx
https://openreview.net/forum?id=SyOvg6jxx&noteId=BybRJGfNl
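The abstract above maps states to hash codes and counts code occurrences to form an exploration bonus. A minimal sketch with a static SimHash (signs of a fixed Gaussian projection) and the classic beta / sqrt(n) bonus; the code length k and beta are illustrative hyperparameters, and the learned-autoencoder variant discussed in the reviews is not shown.

```python
import numpy as np
from collections import defaultdict

class SimHashCounter:
    """Count-based exploration bonus over hashed states.

    States are projected with a fixed Gaussian matrix and binarized into a
    k-bit code; visit counts over codes feed a bonus beta / sqrt(n(code)).
    """

    def __init__(self, obs_dim, k=32, beta=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((k, obs_dim))  # fixed projection
        self.beta = beta
        self.counts = defaultdict(int)

    def bonus(self, obs):
        code = tuple((self.A @ obs > 0).astype(np.int8))  # k-bit hash code
        self.counts[code] += 1
        return self.beta / np.sqrt(self.counts[code])

# Shape the environment reward with the exploration bonus.
counter = SimHashCounter(obs_dim=128)
obs = np.random.default_rng(1).standard_normal(128)
shaped_reward = 1.0 + counter.bonus(obs)
```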
BJX3nErVg
SyOvg6jxx
ICLR.cc/2017/conference/-/paper586/official/review
{"title": "Final review: significant results in an important problem, but many moving parts", "rating": "6: Marginally above acceptance threshold", "review": "The paper proposes a new exploration scheme for reinforcement learning using locality-sensitive hashing states to build a table of visit counts which are then used to encourage exploration in the style of MBIE-EB of Strehl and Littman.\n\nSeveral points are appealing about this approach: first, it is quite simple compared to the current alternatives (e.g. VIME, density estimation and pseudo-counts). Second, the paper presents results across several domains, including classic benchmarks, continuous control domains, and Atari 2600 games. In addition, there are results for comparison from several other algorithms (DQN variants), many of which are quite recent. The results indicate that the approach clearly improves over the baseline. The results against other exploration algorithms are not as clear (more dependent on the individual domain/game), but I think this is fine as the appeal of the technique is its simplicity. Third, the paper presents results on the sensitivity to the granularity of the abstraction.\n\nI have only one main complaint, which is it seems there was some engineering involved to get this to work, and I do not have much confidence in the robustness of the conclusions. I am left uncertain as to how the story changes given slight perturbations over hyper-parameter values or enabling/disabling of certain choices. For example, how critical was using PixelCNN (or tying the weights?) or noisifying the output in the autoencoder, or what happens if you remove the custom additions to BASS? The granularity results show that the choice of resolution is sensitive, and even across games the story is not consistent.\n\nThe authors decide to use state-based counts instead of state-action based counts, deviating from the theory, which is odd because the reason to used LSH in the first place is to get closer to what MBIE-EB would advise via tabular counts. There are several explanations as to why state-based versus state-action based counts perform similarly in Atari; the authors do not offer any. Why?\n\nIt seems like the technique could be easily used in DQN as well, and many of the variants the authors compare to are DQN-based, so omitting DQN here again seems strange. The authors justify their choice of TRPO by saying it ensures safe policy improvement, though it is not clear that this is still true when adding these exploration bonuses.\n\nThe case study on Montezuma's revenge, while interesting, involves using domain knowledge and so does not really fit well with the rest of the paper.\n\nSo, in the end, simple and elegant idea to help with exploration tested in many domains, though I am not certain which of the many pieces are critical for the story to hold versus just slightly helpful, which could hurt the long-term impact of the paper.\n\n--- After response:\n\nThank you for the thorough response, and again my apologies for the late reply.\n\nI appreciate the follow-up version on the robustness of SimHash and state counting vs. state-action counting.\n\nThe paper addresses an important problem (exploration), suggesting a \"simple\" (compared to density estimation) counting method via hashing. It is a nice alternative approach to the one offered by Bellemare et al. If discussion among reviewers were possible, I would now try to assemble an argument to accept the paper. 
Specifically, I am not as concerned about beating the state of the art in Montezuma's as Reviewer3 as the merit of the current paper is one the simplicity of the hashing and on the wide comparison of domains vs. the baseline TRPO. This paper shows that we should not give up on simple hashing. There still seems to be a bunch of fiddly bits to get this to work, and I am still not confident that these results are easily reproducible. Nonetheless, it is an interesting new contrasting approach to exploration which deserves attention.\n\nNot important for the decision: The argument in the rebuttal concerning DQN & A3C is a bit of a straw man. I did not mention anything at all about A3C, I strictly referred to DQN, which is less sensitive to parameter-tuning than A3C. Also, Bellemare 2016 main result on Montezuma used DQN. Hence the omission of these techniques applied to DQN still seems a bit strange (for the Atari experiments). The figure S9 from Mnih et al. points to instances of asynchronous one-step Sarsa with varied thread counts.. of course this will be sensitive to parameters: it is both asynchronous online algorithms *and* the parameter varied is the thread count! This is hardly indicative of DQN's sensitivity to parameters, since DQN is (a) single-threaded (b) uses experience replay, leading to slower policy changes. Another source of stability, DQN uses a target network that changes infrequently. Perhaps the authors made a mistake in the reference graph in the figure? (I see no Figure 9 in https://arxiv.org/pdf/1602.01783v2.pdf , I assume the authors meant Figure S9)", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
["Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel"]
Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.
["Deep learning", "Reinforcement Learning", "Games"]
https://openreview.net/forum?id=SyOvg6jxx
https://openreview.net/pdf?id=SyOvg6jxx
https://openreview.net/forum?id=SyOvg6jxx&noteId=BJX3nErVg
rkK1pXKNx
SyOvg6jxx
ICLR.cc/2017/conference/-/paper586/official/review
{"title": "Review", "rating": "4: Ok but not good enough - rejection", "review": "This paper introduces a new way of extending the count based exploration approach to domains where counts are not readily available. The way in which the authors do it is through hash functions. Experiments are conducted on several domains including control and Atari. \n\nIt is nice that the authors confirmed the results of Bellemare in that given the right \"density\" estimator, count based exploration can be effective. It is also great the observe that given the right features, we can crack games like Montezuma's revenge to some extend.\n\nI, however, have several complaints:\n\nFirst, by using hashing, the authors did not seem to be able to achieve significant improvements over past approaches. Without \"feature engineering\", the authors achieved only a fraction of the performance achieved in Bellemare et al. on Montezuma's Revenge. The proposed approaches In the control domains, the authors also does not outperform VIME. So experimentally, it is very hard to justify the approach. \n\nSecond, hashing, although could be effective in the domains that the authors tested on, it may not be the best way of estimating densities going forward. As the environments get more complicated, some learning methods, are required for the understanding of the environments instead of blind hashing. The authors claim that the advantage of the proposed method over Bellemare et al. is that one does not have to design density estimators. But I would argue that density estimators have become readily available (PixelCNN, VAEs, Real NVP, GANs) that they can be as easily applied as can hashing. Training the density estimators is not difficult problem as more.\n\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
["Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel"]
Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.
["Deep learning", "Reinforcement Learning", "Games"]
https://openreview.net/forum?id=SyOvg6jxx
https://openreview.net/pdf?id=SyOvg6jxx
https://openreview.net/forum?id=SyOvg6jxx&noteId=rkK1pXKNx
B15BdW8Vx
Sk8csP5ex
ICLR.cc/2017/conference/-/paper423/official/review
{"title": "interesting extension of the result of Choromanska et al. but too incremental", "rating": "3: Clear rejection", "review": "This paper shows how spin glass techniques that were introduced in Choromanska et al. to analyze surface loss of deep neural networks can be applied to deep residual networks. This is an interesting contribution but it seems to me that the results are too similar to the ones in Choromanska et al. and thus the novelty is seriously limited. Main theoretical techniques described in the paper were already introduced and main theoretical results mentioned there were in fact already proved. The authors also did not get rid of lots of assumptions from Choromanska et al. (path-independence, assumptions about weights distributions, etc.).", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
The loss surface of residual networks: Ensembles and the role of batch normalization
["Etai Littwin", "Lior Wolf"]
Deep Residual Networks present a premium in performance in comparison to conventional networks of the same depth and are trainable at extreme depths. It has recently been shown that Residual Networks behave like ensembles of relatively shallow networks. We show that these ensemble are dynamic: while initially the virtual ensemble is mostly at depths lower than half the network’s depth, as training progresses, it becomes deeper and deeper. The main mechanism that controls the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch Normalization technique. We explain this behavior and demonstrate the driving force behind it. As a main tool in our analysis, we employ generalized spin glass models, which we also use in order to study the number of critical points in the optimization of Residual Networks.
["Deep learning", "Theory"]
https://openreview.net/forum?id=Sk8csP5ex
https://openreview.net/pdf?id=Sk8csP5ex
https://openreview.net/forum?id=Sk8csP5ex&noteId=B15BdW8Vx
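The abstract above builds on the view of an n-block residual network as an ensemble of 2^n paths, where a path of length k passes through exactly k residual branches. Under uniform path weighting the length distribution is binomial and centered at half the depth, which is the baseline against which the paper's claimed shift toward deeper paths during training is measured. A quick numeric check of that baseline (the block count is illustrative):

```python
from math import comb

# Unrolling an n-block residual network yields 2**n paths; comb(n, k) of
# them pass through exactly k residual branches. With uniform weighting
# the path-length distribution is Binomial(n, 1/2), with mean n / 2.
n = 54  # number of residual blocks (illustrative)
dist = [comb(n, k) / 2**n for k in range(n + 1)]
mean_len = sum(k * p for k, p in enumerate(dist))
mode = max(range(n + 1), key=lambda k: dist[k])
print(f"mean path length {mean_len:.1f}, mode {mode} of {n} blocks")
```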
rkva93GNg
Sk8csP5ex
ICLR.cc/2017/conference/-/paper423/official/review
{"title": "Interesting theoretical analysis (with new supporting experiments) but presented in a slightly confusing fashion.", "rating": "7: Good paper, accept", "review": "Summary:\nIn this paper, the authors study ResNets through a theoretical formulation of a spin glass model. The conclusions are that ResNets behave as an ensemble of shallow networks at the start of training (by examining the magnitude of the weights for paths of a specific length) but this changes through training, through which the scaling parameter C (from assumption A4) increases, causing it to behave as an ensemble of deeper and deeper networks.\n\nClarity:\nThis paper was somewhat difficult to follow, being heavy in notation, with perhaps some notation overloading. A summary of some of the proofs in the main text might have been helpful.\n\nSpecific Comments:\n- In the proof of Lemma 2, I'm not sure where the sequence beta comes from (I don't see how it follows from 11?)\n\n- The ResNet structure used in the paper is somewhat different from normal with multiple layers being skipped? (Can the same analysis be used if only one layer is skipped? It seems like the skipping mostly affects the number of paths there are of a certain length?)\n\n- The new experiments supporting the scale increase in practice are interesting! I'm not sure about Theorems 3, 4 necessarily proving this link theoretically however, particularly given the simplifying assumption at the start of Section 4.2?\n\n\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
The loss surface of residual networks: Ensembles and the role of batch normalization
["Etai Littwin", "Lior Wolf"]
Deep Residual Networks present a premium in performance in comparison to conventional networks of the same depth and are trainable at extreme depths. It has recently been shown that Residual Networks behave like ensembles of relatively shallow networks. We show that these ensemble are dynamic: while initially the virtual ensemble is mostly at depths lower than half the network’s depth, as training progresses, it becomes deeper and deeper. The main mechanism that controls the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch Normalization technique. We explain this behavior and demonstrate the driving force behind it. As a main tool in our analysis, we employ generalized spin glass models, which we also use in order to study the number of critical points in the optimization of Residual Networks.
["Deep learning", "Theory"]
https://openreview.net/forum?id=Sk8csP5ex
https://openreview.net/pdf?id=Sk8csP5ex
https://openreview.net/forum?id=Sk8csP5ex&noteId=rkva93GNg
ryTj8pINe
Sk8csP5ex
ICLR.cc/2017/conference/-/paper423/official/review
{"title": "promising insightful results", "rating": "7: Good paper, accept", "review": "\nThis paper extend the Spin Glass analysis of Choromanska et al. (2015a) to Res Nets which yield the novel dynamic ensemble results for Res Nets and the connection to Batch Normalization and the analysis of their loss surface of Res Nets.\n\nThe paper is well-written with many insightful explanation of results. Although the technical contributions extend the Spin Glass model analysis of the ones by Choromanska et al. (2015a), the updated version could eliminate one of the unrealistic assumptions and the analysis further provides novel dynamic ensemble results and the connection to Batch Normalization that gives more insightful results about the structure of Res Nets. \n\nIt is essential to show this dynamic behaviour in a regime without batch normalization to untangle the normalization effect on ensemble feature. Hence authors claim that steady increase in the L_2 norm of the weights will maintain the this feature but setting for Figure 1 is restrictive to empirically support the claim. At least results on CIFAR 10 without batch normalization for showing effect of L_2 norm increase and results that support claims about Theorem 4 would strengthen the paper.\n\nThis work provides an initial rigorous framework to analyze better the inherent structure of the current state of art Res Net architectures and its variants which can stimulate potentially more significant results towards careful understanding of current state of art models (Rather than always to attempting to improve the performance of Res Nets by applying intuitive incremental heuristics, it is important to progress on some solid understanding too).", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
The loss surface of residual networks: Ensembles and the role of batch normalization
["Etai Littwin", "Lior Wolf"]
Deep Residual Networks present a premium in performance in comparison to conventional networks of the same depth and are trainable at extreme depths. It has recently been shown that Residual Networks behave like ensembles of relatively shallow networks. We show that these ensemble are dynamic: while initially the virtual ensemble is mostly at depths lower than half the network’s depth, as training progresses, it becomes deeper and deeper. The main mechanism that controls the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch Normalization technique. We explain this behavior and demonstrate the driving force behind it. As a main tool in our analysis, we employ generalized spin glass models, which we also use in order to study the number of critical points in the optimization of Residual Networks.
["Deep learning", "Theory"]
https://openreview.net/forum?id=Sk8csP5ex
https://openreview.net/pdf?id=Sk8csP5ex
https://openreview.net/forum?id=Sk8csP5ex&noteId=ryTj8pINe
SJKENmk4l
BJxhLAuxg
ICLR.cc/2017/conference/-/paper69/official/review
{"title": "", "rating": "4: Ok but not good enough - rejection", "review": "The topic of the paper, model-based RL with a learned model, is important and timely. The paper is well written. I feel that the presented results are too incremental. Augmenting the frame prediction network with another head that predicts the reward is a very sensible thing to do. However neither the methodology not the results are novel / surprising, given that the original method of [Oh et al. 2015] already learns to successfully increment score counters in predicted frames in many games.\n\nI\u2019m very much looking forward to seeing the results of applying the learned joint model of frames and rewards to model-based RL as proposed by the authors. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
A Deep Learning Approach for Joint Video Frame and Reward Prediction in Atari Games
["Felix Leibfried", "Nate Kushman", "Katja Hofmann"]
Reinforcement learning is concerned with learning to interact with environments that are initially unknown. State-of-the-art reinforcement learning approaches, such as DQN, are model-free and learn to act effectively across a wide range of environments such as Atari games, but require huge amounts of data. Model-based techniques are more data-efficient, but need to acquire explicit knowledge about the environment dynamics or the reward structure. In this paper we take a step towards using model-based techniques in environments with high-dimensional visual state space when system dynamics and the reward structure are both unknown and need to be learned, by demonstrating that it is possible to learn both jointly. Empirical evaluation on five Atari games demonstrate accurate cumulative reward prediction of up to 200 frames. We consider these positive results as opening up important directions for model-based RL in complex, initially unknown environments.
["atari games", "environments", "deep learning", "joint video frame", "reward prediction", "unknown", "techniques", "reward structure", "reinforcement learning approaches"]
https://openreview.net/forum?id=BJxhLAuxg
https://openreview.net/pdf?id=BJxhLAuxg
https://openreview.net/forum?id=BJxhLAuxg&noteId=SJKENmk4l
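The reviews describe the model as a frame-prediction network ([Oh et al. 2015]) augmented with a second head that predicts the reward. A minimal PyTorch sketch of that shared-encoder, two-head shape; the layer sizes, the multiplicative action interaction, and the three-way reward classification are illustrative stand-ins, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class JointFrameRewardModel(nn.Module):
    """Shared encoder; one head for the next frame, one for the reward."""

    def __init__(self, n_reward_bins=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(256), nn.ReLU(),
        )
        self.action_embed = nn.LazyLinear(256)           # action one-hot -> factor
        self.frame_head = nn.LazyLinear(84 * 84)         # predicted next frame
        self.reward_head = nn.LazyLinear(n_reward_bins)  # e.g. {-1, 0, +1}

    def forward(self, frames, action_onehot):
        # Multiplicative action interaction, loosely following Oh et al. 2015.
        z = self.encoder(frames) * self.action_embed(action_onehot)
        return self.frame_head(z), self.reward_head(z)

model = JointFrameRewardModel()
frames = torch.zeros(1, 4, 84, 84)         # stack of 4 grayscale frames
action = torch.zeros(1, 6); action[0, 2] = 1.0
next_frame_logits, reward_logits = model(frames, action)
```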
ryuwhyQ4e
BJxhLAuxg
ICLR.cc/2017/conference/-/paper69/official/review
{"title": "Final Review", "rating": "4: Ok but not good enough - rejection", "review": "This paper introduces an additional reward-predicting head to an existing NN architecture for video frame prediction. In Atari game playing scenarios, the authors show that this model can successfully predict both reward and next frames.\n\nPros:\n- Paper is well written and easy to follow.\n- Model is clear to understand.\n\nCons:\n- The model is incrementally different than the baseline. The authors state that their purpose is to establish a pre-condition, which they achieve. But this makes the paper quite limited in scope.\n\nThis paper reads like the start of a really good long paper, or a good short paper. Following through on the future work proposed by the authors would make a great paper. As it stands, the paper is a bit thin on new contributions.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
A Deep Learning Approach for Joint Video Frame and Reward Prediction in Atari Games
["Felix Leibfried", "Nate Kushman", "Katja Hofmann"]
Reinforcement learning is concerned with learning to interact with environments that are initially unknown. State-of-the-art reinforcement learning approaches, such as DQN, are model-free and learn to act effectively across a wide range of environments such as Atari games, but require huge amounts of data. Model-based techniques are more data-efficient, but need to acquire explicit knowledge about the environment dynamics or the reward structure. In this paper we take a step towards using model-based techniques in environments with high-dimensional visual state space when system dynamics and the reward structure are both unknown and need to be learned, by demonstrating that it is possible to learn both jointly. Empirical evaluation on five Atari games demonstrate accurate cumulative reward prediction of up to 200 frames. We consider these positive results as opening up important directions for model-based RL in complex, initially unknown environments.
["atari games", "environments", "deep learning", "joint video frame", "reward prediction", "unknown", "techniques", "reward structure", "reinforcement learning approaches"]
https://openreview.net/forum?id=BJxhLAuxg
https://openreview.net/pdf?id=BJxhLAuxg
https://openreview.net/forum?id=BJxhLAuxg&noteId=ryuwhyQ4e
SkchXXWVe
BJxhLAuxg
ICLR.cc/2017/conference/-/paper69/official/review
{"title": "Well written paper with a clear focus and interesting future work proposal but with an overall minor contribution.", "rating": "4: Ok but not good enough - rejection", "review": "The paper extends a recently proposed video frame prediction method with reward prediction in order to learn the unknown system dynamics and reward structure of an environment. The method is tested on several Atari games and is able to predict the reward quite well within a range of about 50 steps. The paper is very well written, focussed and is quite clear about its contribution to the literature. The experiments and methods are sound. However, the results are not really surprising given that the system state and the reward are linked deterministically in Atari games. In other words, we can always decode the reward from a network that successfully encodes future system states in its latent representation. The contribution of the paper is therefore minor. The paper would be much stronger if the authors could include experiments on the two future work directions they suggest in the conclusions: augmenting training with artificial samples and adding Monte-Carlo tree search. The suggestions might decrease the number of real-world training samples and increase performance, both of which would be very interesting and impactful.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
A Deep Learning Approach for Joint Video Frame and Reward Prediction in Atari Games
["Felix Leibfried", "Nate Kushman", "Katja Hofmann"]
Reinforcement learning is concerned with learning to interact with environments that are initially unknown. State-of-the-art reinforcement learning approaches, such as DQN, are model-free and learn to act effectively across a wide range of environments such as Atari games, but require huge amounts of data. Model-based techniques are more data-efficient, but need to acquire explicit knowledge about the environment dynamics or the reward structure. In this paper we take a step towards using model-based techniques in environments with high-dimensional visual state space when system dynamics and the reward structure are both unknown and need to be learned, by demonstrating that it is possible to learn both jointly. Empirical evaluation on five Atari games demonstrate accurate cumulative reward prediction of up to 200 frames. We consider these positive results as opening up important directions for model-based RL in complex, initially unknown environments.
["atari games", "environments", "deep learning", "joint video frame", "reward prediction", "unknown", "techniques", "reward structure", "reinforcement learning approaches"]
https://openreview.net/forum?id=BJxhLAuxg
https://openreview.net/pdf?id=BJxhLAuxg
https://openreview.net/forum?id=BJxhLAuxg&noteId=SkchXXWVe
rkYg2xjEg
BJmCKBqgl
ICLR.cc/2017/conference/-/paper262/official/review
{"title": "Why benchmark techniques for IoT on a Xeon?", "rating": "6: Marginally above acceptance threshold", "review": "Dyvedeep presents three approximation techniques for deep vision models aimed at improving inference speed.\nThe techniques are novel as far as I know.\nThe paper is clear, the results are plausible.\n\nThe evaluation of the proposed techniques is does not make a compelling case that someone interested in faster inference would ultimately be well-served by a solution involving the proposed methods.\n\nThe authors delineate \"static\" acceleration techniques (e.g. reduced bit-width, weight pruning) from \"dynamic\" acceleration techniques which are changes to the inference algorithm itself. The delineation would be fine if the use of each family of techniques were independent of the other, but this is not the case. For example, the use of SPET would, I think, conflict with the use of factored weight matrices (I recall this from http://papers.nips.cc/paper/5025-predicting-parameters-in-deep-learning.pdf, but I suspect there may be more recent work). For this reason, a comparison between SPET and factored weight matrices would strengthen the case that SPET is a relevant innovation. In favor of the factored-matrix approach, there would I think be fewer hyperparameters and the computations would make more-efficient use of blocked linear algebra routines--the case for the superiority of SPET might be difficult to make.\n\nThe authors also do not address their choice of the Xeon for benchmarking, when the use cases they identify in the introduction include \"low power\" and \"deeply embedded\" applications. In these sorts of applications, a mobile GPU would be used, not a Xeon. A GPU implementation of a convnet works differently than a CPU implementation in ways that might reduce or eliminate the advantage of the acceleration techniques put forward in this paper.\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
DyVEDeep: Dynamic Variable Effort Deep Neural Networks
["Sanjay Ganapathy", "Swagath Venkataramani", "Balaraman Ravindran", "Anand Raghunathan"]
Deep Neural Networks (DNNs) have advanced the state-of-the-art on a variety of machine learning tasks and are deployed widely in many real-world products. However, the compute and data requirements demanded by large-scale DNNs remains a significant challenge. In this work, we address this challenge in the context of DNN inference. We propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep), which exploit the heterogeneity in the characteristics of inputs to DNNs to improve their compute efficiency while maintaining the same classification accuracy. DyVEDeep equips DNNs with dynamic effort knobs, which in course of processing an input, identify how critical a group of computations are to classify the input. DyVEDeep dynamically focuses its compute effort only on the critical computations, while the skipping/approximating the rest. We propose 3 effort knobs that operate at different levels of granularity viz. neuron, feature and layer levels. We build DyVEDeep versions for 5 popular image recognition benchmarks on 3 image datasets---MNIST, CIFAR and ImageNet. Across all benchmarks, DyVEDeep achieves 2.1X-2.6X reduction in number of scalar operations, which translates to 1.9X-2.3X performance improvement over a Caffe-based sequential software implementation, for negligible loss in accuracy.
["dyvedeep", "dnns", "input", "variety", "machine learning tasks", "many", "products", "compute"]
https://openreview.net/forum?id=BJmCKBqgl
https://openreview.net/pdf?id=BJmCKBqgl
https://openreview.net/forum?id=BJmCKBqgl&noteId=rkYg2xjEg
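The DyVEDeep abstract above describes dynamic effort knobs that skip or approximate computations judged non-critical for a given input. A minimal sketch in the spirit of the neuron-level knob the reviews name (SPET): accumulate a ReLU neuron's dot product in chunks and stop as soon as a bound on the remaining terms shows the output must saturate at zero. The Cauchy-Schwarz bound and chunk size here are illustrative choices, not the paper's mechanism.

```python
import numpy as np

def relu_dot_early_exit(w, x, chunk=64):
    """ReLU(w @ x) with chunked accumulation and an early zero-exit.

    After each chunk, Cauchy-Schwarz bounds the largest value the remaining
    terms could add; if even that cannot lift the sum above zero, the ReLU
    output is exactly zero and the rest of the work is skipped.
    """
    acc = 0.0
    for i in range(0, len(w), chunk):
        acc += float(w[i:i + chunk] @ x[i:i + chunk])
        rest = np.linalg.norm(w[i + chunk:]) * np.linalg.norm(x[i + chunk:])
        if acc + rest < 0.0:  # saturation is certain: terminate early
            return 0.0
    return max(acc, 0.0)

rng = np.random.default_rng(0)
w, x = rng.standard_normal(512), rng.standard_normal(512)
y = relu_dot_early_exit(w, x)
```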
BkLHl2ZEe
BJmCKBqgl
ICLR.cc/2017/conference/-/paper262/official/review
{"title": "Interesting ideas, but I'm not sure about the significance.", "rating": "7: Good paper, accept", "review": "This work proposes a number of approximations for speeding up feed-forward network computations at inference time. Unlike much of the previous work in this area which tries to compress a large network, the authors propose algorithms that decide whether to approximate computations for each particular input example. \n\nSpeeding up inference is an important problem and this work takes a novel approach. The presentation is exceptionally clear, the diagrams are very beautiful, the ideas are interesting, and the experiments are good. This is a high-quality paper. I especially enjoyed the description of the different methods proposed (SPET, SDSS, SFMA) to exploit patterns in the classifer. \n\nMy main concern is that the significance of this work is limited because of the additional complexity and computational costs of using these approximations. In the experiments, the DyVEDeep approach was compared to serial implementations of four large classification models --- inference in these models is order of magnitudes faster on systems that support parallelization. I assume that DyVEDeep has little-to-no performance advantage on a system that allows parallelization, and so anyone looking to speed up their inference on a serial system would want to see a comparison between this approach and the model-compression approaches. Thus, I am not sure how much of an impact this approach can have in it's current state.\n\nSuggestions:\n-I wondered what (if any) bounds could be made on the approximation errors of the proposed methods?", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
DyVEDeep: Dynamic Variable Effort Deep Neural Networks
["Sanjay Ganapathy", "Swagath Venkataramani", "Balaraman Ravindran", "Anand Raghunathan"]
Deep Neural Networks (DNNs) have advanced the state-of-the-art on a variety of machine learning tasks and are deployed widely in many real-world products. However, the compute and data requirements demanded by large-scale DNNs remains a significant challenge. In this work, we address this challenge in the context of DNN inference. We propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep), which exploit the heterogeneity in the characteristics of inputs to DNNs to improve their compute efficiency while maintaining the same classification accuracy. DyVEDeep equips DNNs with dynamic effort knobs, which in course of processing an input, identify how critical a group of computations are to classify the input. DyVEDeep dynamically focuses its compute effort only on the critical computations, while the skipping/approximating the rest. We propose 3 effort knobs that operate at different levels of granularity viz. neuron, feature and layer levels. We build DyVEDeep versions for 5 popular image recognition benchmarks on 3 image datasets---MNIST, CIFAR and ImageNet. Across all benchmarks, DyVEDeep achieves 2.1X-2.6X reduction in number of scalar operations, which translates to 1.9X-2.3X performance improvement over a Caffe-based sequential software implementation, for negligible loss in accuracy.
["dyvedeep", "dnns", "input", "variety", "machine learning tasks", "many", "products", "compute"]
https://openreview.net/forum?id=BJmCKBqgl
https://openreview.net/pdf?id=BJmCKBqgl
https://openreview.net/forum?id=BJmCKBqgl&noteId=BkLHl2ZEe
H1nMEJZ4g
BJmCKBqgl
ICLR.cc/2017/conference/-/paper262/official/review
{"title": "Interesting and clearly written paper. My main concerns about this paper, are about the novelty, and the advantages of the proposed techniques over related papers in the area.", "rating": "6: Marginally above acceptance threshold", "review": "The authors describe a series of techniques which can be used to reduce the total amount of computation that needs to be performed in Deep Neural Networks. The authors propose to selectively identify how important a certain set of computations is to the final DNN output, and to use this information to selectively skip certain computations in the network. As deep learning technologies become increasingly widespread on mobile devices, techniques which enable efficient inference on such devices are becoming increasingly important for practical applications. \n\nThe paper is generally well-written and clear to follow. I had two main comments that concern the experimental design, and the relationship to previous work:\n\n1. In the context of deployment on mobile devices, computational costs in terms of both system memory as well as processing are important consideration. While the proposed techniques do improve computational costs, they don\u2019t reduce model size in terms of total number of parameters. Also, the gains obtained using the proposed method appear to be similar to other works that do allow for improvements in terms of both memory and computation (see, e.g., (Han et al., 2015)). It would have been interesting if the authors had reported results when the proposed techniques were applied to models that have been compressed in size as well.\n\nS. Han, H. Mao, and W. J. Dally. \"Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding.\" arXiv prepring arXiv:1510.00149 (2015).\n\n2. The SDSS technique in the paper appears to be very similar to the \u201cPerforated CNN\u201d technique proposed by Figurnov et al. (2015). In that work, as in the authors work, CNN activations are approximated by interpolating responses from neighbors. The authors should comment on the similarity and differences between the proposed method and the referenced work.\n\nFigurnov, Michael, Dmitry Vetrov, and Pushmeet Kohli. \"Perforatedcnns: Acceleration through elimination of redundant convolutions.\" arXiv preprint arXiv:1504.08362 (2015).\n\nOther minor comments appear below:\n\n3. A clarification question: In comparing the proposed methods to the baseline, in Section 4, the authors mention that they used their own custom implementation. However, do the baselines use the same custom implementation, or do they used the optimized BLAS libraries?\n\n4. The authors should also consider citing the following additional references:\n * S. Tan and K. C. Sim, \"Towards implicit complexity control using variable-depth deep neural networks for automatic speech recognition,\" 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, 2016, pp. 5965-5969.\n * Graves, Alex. \"Adaptive Computation Time for Recurrent Neural Networks.\" arXiv preprint arXiv:1603.08983 (2016).\n\n5. Please explain what the Y-axis in Figure 7 represents in the text.\n\n6. Typographical Error: Last paragraph of Section 2: \u201c... are qualitatively different the aforementioned ...\u201d \u2192 \u201c... are qualitatively different from the aforementioned ...\u201d", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
DyVEDeep: Dynamic Variable Effort Deep Neural Networks
["Sanjay Ganapathy", "Swagath Venkataramani", "Balaraman Ravindran", "Anand Raghunathan"]
Deep Neural Networks (DNNs) have advanced the state-of-the-art on a variety of machine learning tasks and are deployed widely in many real-world products. However, the compute and data requirements demanded by large-scale DNNs remains a significant challenge. In this work, we address this challenge in the context of DNN inference. We propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep), which exploit the heterogeneity in the characteristics of inputs to DNNs to improve their compute efficiency while maintaining the same classification accuracy. DyVEDeep equips DNNs with dynamic effort knobs, which in course of processing an input, identify how critical a group of computations are to classify the input. DyVEDeep dynamically focuses its compute effort only on the critical computations, while the skipping/approximating the rest. We propose 3 effort knobs that operate at different levels of granularity viz. neuron, feature and layer levels. We build DyVEDeep versions for 5 popular image recognition benchmarks on 3 image datasets---MNIST, CIFAR and ImageNet. Across all benchmarks, DyVEDeep achieves 2.1X-2.6X reduction in number of scalar operations, which translates to 1.9X-2.3X performance improvement over a Caffe-based sequential software implementation, for negligible loss in accuracy.
["dyvedeep", "dnns", "input", "variety", "machine learning tasks", "many", "products", "compute"]
https://openreview.net/forum?id=BJmCKBqgl
https://openreview.net/pdf?id=BJmCKBqgl
https://openreview.net/forum?id=BJmCKBqgl&noteId=H1nMEJZ4g
BkcY-CZNl
BJbD_Pqlg
ICLR.cc/2017/conference/-/paper403/official/review
{"title": "Updated Review", "rating": "7: Good paper, accept", "review": "The paper reports several connections between the image representations in state-of-the are object recognition networks and findings from human visual psychophysics:\n1) It shows that the mean L1 distance in the feature space of certain CNN layers is predictive of human noise-detection thresholds in natural images.\n2) It reports that for 3 different 2-AFC tasks for which there exists a condition that is hard and one that is easy for humans, the mutual information between decision label and quantised CNN activations is usually higher in the condition that is easier for humans.\n3) It reproduces the general bandpass nature of contrast/frequency detection sensitivity in humans. \n\nWhile these findings appear interesting, they are also rather anecdotal and some of them seem to be rather trivial (e.g. findings in 2). To make a convincing statement it would be important to explore what aspects of the CNN lead to the reported findings. One possible way of doing that could be to include good baseline models to compare against. As I mentioned before, one such baseline should be reasonable low-level vision model. Another interesting direction would be to compare the results for the same network at different training stages.\n\nIn that way one might be able to find out which parts of the reported results can be reproduced by simple low-level image processing systems, which parts are due to the general deep network\u2019s architecture and which parts arise from the powerful computational properties (object recognition performance) of the CNNs.\n\nIn conclusion, I believe that establishing correspondences between state-of-the art CNNs and human vision is a potentially fruitful approach. However to make a convincing point that found correspondences are non-trivial, it is crucial to show that non-trivial aspects of the CNN lead to the reported findings, which was not done. Therefore, the contribution of the paper is limited since I cannot judge whether the findings really tell me something about a unique relation between high-performing CNNs and the human visual system.\n\nUPDATE:\n\nThank you very much for your extensive revision and inclusion of several of the suggested baselines. \nThe results of the baseline models often raise more questions and make the interpretation of the results more complex, but I feel that this reflects the complexity of the topic and makes the work rather more worthwhile. \n\nOne further suggestion: As the experiments with the snapshots of the CaffeNet shows, the direct relationship between CNN performance and prediction accuracy of biological vision known from Yamins et al. 2014 and Cadieu et al. 2014 does not necessarily hold in your experiments. I think this should be discussed somewhere in the paper.\n\nAll in all, I think that the paper now constitutes a decent contribution relating state-of-the art CNNs to human psychophysics and I would be happy for this work to be accepted.\n\nI raise the my rating for this paper to 7.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Human perception in computer vision
["Ron Dekel"]
Computer vision has made remarkable progress in recent years. Deep neural network (DNN) models optimized to identify objects in images exhibit unprecedented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now be solved more easily based on previous learning. Biological vision (learned in life and through evolution) is also accurate and general-purpose. Is it possible that these different learning regimes converge to similar problem-dependent optimal computations? We therefore asked whether the human system-level computation of visual perception has DNN correlates and considered several anecdotal test cases. We found that perceptual sensitivity to image changes has DNN mid-computation correlates, while sensitivity to segmentation, crowding and shape has DNN end-computation correlates. Our results quantify the applicability of using DNN computation to estimate perceptual loss, and are consistent with the fascinating theoretical view that properties of human perception are a consequence of architecture-independent visual learning.
["Computer vision", "Transfer Learning"]
https://openreview.net/forum?id=BJbD_Pqlg
https://openreview.net/pdf?id=BJbD_Pqlg
https://openreview.net/forum?id=BJbD_Pqlg&noteId=BkcY-CZNl
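The first numbered finding in the review above (mean L1 feature distance predicting human noise-detection thresholds) can be sketched in a few lines. A trained CNN layer is assumed in the paper; a random ReLU feature map stands in here so the snippet is self-contained, and all names and sizes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 32 * 32)) / np.sqrt(32 * 32)  # stand-in "layer"

def features(img):
    return np.maximum(0.0, W @ img.ravel())

def mean_l1_distance(img, noise_std):
    noisy = img + rng.normal(0.0, noise_std, img.shape)
    return float(np.mean(np.abs(features(img) - features(noisy))))

img = rng.random((32, 32))
# Larger perturbations should yield larger feature-space distances,
# which the paper correlates with human detection thresholds.
for s in (0.05, 0.1, 0.2):
    print(s, round(mean_l1_distance(img, s), 4))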
H19W6GPVl
BJbD_Pqlg
ICLR.cc/2017/conference/-/paper403/official/review
{"title": "Review of \"Human Perception in Computer Vision\"", "rating": "6: Marginally above acceptance threshold", "review": "The author works to compare DNNs to human visual perception, both quantitatively and qualitatively. \n\nTheir first result involves performing a psychophysical experiment both on humans and on a model and then comparing the results (actually I think the psychophysical data was collected in a different work, and is just used here). The specific psychophysical experiment determined, separately for each of a set of approx. 1110 images, what the noise level of additive noise would have to be to make a just-noticeable-difference for humans in discriminating the noiseless image from the noisy one. The authors then define a metric on neural networks that allows them to measure what they posit might be a similar property for the networks. They then correlate the pattern of noise levels between neural networks that the humans. Deep neural networks end up being much better predictors of the human pattern of noise levels than simpler measure of image perturbation (e.g. RMS contrast). \n\nA second result involves comparing DNNs to humans in terms of their pattern errors in a series of highly controlled experiments using stimuli that illustrate classic properties of human visual processing -- including segmentation, crowding and shape understanding. They then used an information-theoretic single-neuron metric of discriminability to assess similar patterns of errors for the DNNs. Again, top layers of DNNs were able to reproduce the human patterns of difficulty across stimuli, at least to some extent. \n\nA third result involves comparing DNNs to humans in terms of their pattern of contrast sensitivity across a series of sine-grating images at different frequencies. (There is a classic result from vision research as to what this pattern should be, so it makes a natural target for comparison to models.) The authors define a DNN correlate for the propertie in terms of the cross-neuron average of the L1-distance between responses to a blank image and responses to a sinuisoid of each contrast and frequency. They then qualitatively compare the results of this metric for DNNs models to known results from the literature on humans, finding that, like humans, there is an apparent bandpass response for low-contrast gratings and a mostly constant response at high contrast. \n\nPros:\n * The general concept of comparing deep nets to psychophysical results in a detailed, quantitative way, is really nice. \n\n * They nicely defined a set of \"linking functions\", e.g. metrics that express how a specific behavioral result is to be generated from the neural network. (Ie. the L1 metrics in results 1 and 3 and the information-theoretic measure in result 2.) The framework for setting up such linking functions seems like a great direction to me. \n\n * The actual psychophysical data seems to have been handled in a very careful and thoughtful way. These folks clearly know what they're doing on the psychophysical end. \n\n\nCons:\n * To my mind, the biggest problem wit this paper is that that it doesn't say something that we didn't really know already. Existing results have shown that DNNs are pretty good models of the human visual system in a whole bunch of ways, and this paper adds some more ways. 
What would have been great would be: \n (a) showing that they metric of comparison to humans that was sufficiently sensitive that it could pull apart various DNN models, making one clearly better than the others. \n (b) identifying a wide gap between the DNNs and the humans that is still unfilled. They sort of do this, since while the DNNs are good at reproducing the human judgements in Result 1, they are not perfect -- gap is between 60% explained variance and 84% inter-human consistency. This 24% gap is potentially important, so I'd really like to see them have explored that gap more -- e.g. (i) widening the gap by identifying which images caused the gap most and focusing a test on those, or (ii) closing the gap by training a neural network to get the pattern 100% correct and seeing if that made better CNNs as measured on other metrics/tasks. \n\nIn other words, I would definitely have traded off not having results 2 and 3 for a deeper exploration of result 1. I think their overall approach could be very fruitful, but it hasn't really been carried far enough here. \n\n * I found a few things confusing about the layout of the paper. I especially found that the quantitative results for results 2 and 3 were not clearly displayed. Why was figure 8 relegated to the appendix? Where are the quantifications of model-human similarities for the data shown in Figure 8? Isn't this the whole meat of their second result? This should really be presented in a more clear way. \n\n * Where is the quantification of model-human similarity for the data show in Figure 3? Isn't there a way to get the human contrast-sensitivity curve and then compare it to that of models in a more quantitively precise way, rather than just note a qualitative agreement? It seems odd to me that this wasn't done. \n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Human perception in computer vision
["Ron Dekel"]
Computer vision has made remarkable progress in recent years. Deep neural network (DNN) models optimized to identify objects in images exhibit unprecedented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now be solved more easily based on previous learning. Biological vision (learned in life and through evolution) is also accurate and general-purpose. Is it possible that these different learning regimes converge to similar problem-dependent optimal computations? We therefore asked whether the human system-level computation of visual perception has DNN correlates and considered several anecdotal test cases. We found that perceptual sensitivity to image changes has DNN mid-computation correlates, while sensitivity to segmentation, crowding and shape has DNN end-computation correlates. Our results quantify the applicability of using DNN computation to estimate perceptual loss, and are consistent with the fascinating theoretical view that properties of human perception are a consequence of architecture-independent visual learning.
["Computer vision", "Transfer Learning"]
https://openreview.net/forum?id=BJbD_Pqlg
https://openreview.net/pdf?id=BJbD_Pqlg
https://openreview.net/forum?id=BJbD_Pqlg&noteId=H19W6GPVl
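The information-theoretic "linking function" described in the reviews above (mutual information between a condition label and quantised unit activations) can be estimated from a 2-D histogram. The bin count and the synthetic data below are illustrative assumptions.

import numpy as np

def mutual_information(acts, labels, bins=8):
    # Quantise activations into equal-width bins, then estimate
    # I(activation; label) in bits from the joint histogram.
    edges = np.histogram_bin_edges(acts, bins=bins)
    q = np.clip(np.digitize(acts, edges) - 1, 0, bins - 1)
    joint = np.zeros((bins, 2))
    for a, l in zip(q, labels):
        joint[a, l] += 1
    joint /= joint.sum()
    pa, pl = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pl)[nz])))

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, 2000)
acts = rng.standard_normal(2000) + 0.8 * labels  # a unit that separates conditions
print(round(mutual_information(acts, labels), 3))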
ByL97qNEg
BJbD_Pqlg
ICLR.cc/2017/conference/-/paper403/official/review
{"title": "Review of \"HUMAN PERCEPTION IN COMPUTER VISION\"", "rating": "6: Marginally above acceptance threshold", "review": "This paper compares the performance, in terms of sensitivity to perturbations, of multilayer neural networks to human vision. In many of the tasks tested, multilayer neural networks exhibit similar sensitivities as human vision. \n\nFrom the tasks used in this paper one may conclude that multilayer neural networks capture many properties of the human visual system. But of course there are well known adversarial examples in which small, perceptually invisible perturbations cause catastrophic errors in categorization, so against that backdrop it is difficult to know what to make of these results. That the two systems exhibit similar phenomenologies in some cases could mean any number of things, and so it would have been nice to see a more in depth analysis of why this is happening in some cases and not others. For example, for the noise perturbations described in the the first section, one sees already that conv2 is correlated with human sensitivity. So why not examine how the first layer filters are being combined to produce this contextual effect? From that we might actually learn something about neural mechanisms.\n\nAlthough I like and am sympathetic to the direction the author is taking here, I feel it just scratches the surface in terms of analyzing perceptual correlates in multilayer neural nets. \n\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Human perception in computer vision
["Ron Dekel"]
Computer vision has made remarkable progress in recent years. Deep neural network (DNN) models optimized to identify objects in images exhibit unprecedented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now be solved more easily based on previous learning. Biological vision (learned in life and through evolution) is also accurate and general-purpose. Is it possible that these different learning regimes converge to similar problem-dependent optimal computations? We therefore asked whether the human system-level computation of visual perception has DNN correlates and considered several anecdotal test cases. We found that perceptual sensitivity to image changes has DNN mid-computation correlates, while sensitivity to segmentation, crowding and shape has DNN end-computation correlates. Our results quantify the applicability of using DNN computation to estimate perceptual loss, and are consistent with the fascinating theoretical view that properties of human perception are a consequence of architecture-independent visual learning.
["Computer vision", "Transfer Learning"]
https://openreview.net/forum?id=BJbD_Pqlg
https://openreview.net/pdf?id=BJbD_Pqlg
https://openreview.net/forum?id=BJbD_Pqlg&noteId=ByL97qNEg
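The contrast-sensitivity correlate the reviews describe (per-unit-averaged L1 distance between responses to a blank field and to sine gratings of varying contrast and spatial frequency) can be sketched as follows. The random ReLU feature map is a self-contained stand-in for the trained network assumed in the paper; frequencies and contrasts are illustrative.

import numpy as np

rng = np.random.default_rng(3)
N = 64
W = rng.standard_normal((128, N * N)) / N  # stand-in for a trained layer

def response(img):
    return np.maximum(0.0, W @ img.ravel())

def grating(freq_cycles, contrast):
    # Horizontal sine grating with mean luminance 0.5.
    x = np.linspace(0.0, 2 * np.pi * freq_cycles, N)
    return 0.5 + 0.5 * contrast * np.tile(np.sin(x), (N, 1))

blank = np.full((N, N), 0.5)
for contrast in (0.1, 1.0):
    row = [round(float(np.mean(np.abs(response(grating(f, contrast))
                                      - response(blank)))), 3)
           for f in (2, 8, 32)]
    print(contrast, row)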
HkMx83V4l
HJ0NvFzxl
ICLR.cc/2017/conference/-/paper10/official/review
{"title": "Complex implementation of a differentiable memory as a graph with promising preliminary results.", "rating": "9: Top 15% of accepted papers, strong accept", "review": "This paper proposes learning on the fly to represent a dialog as a graph (which acts as the memory), and is first demonstrated on the bAbI tasks. Graph learning is part of the inference process, though there is long term representation learning to learn graph transformation parameters and the encoding of sentences as input to the graph. This seems to be the first implementation of a differentiable memory as graph: it is much more complex than previous approaches like memory networks without significant gain in performance in bAbI tasks, but it is still very preliminary work, and the representation of memory as a graph seems much more powerful than a stack. Clarity is a major issue, but from an initial version that was constructive and better read by a computer than a human, the author proposed a hugely improved later version. This original, technically accurate (within what I understood) and thought provoking paper is worth publishing.\n\nThe preliminary results do not tell us yet if the highly complex graph-based differentiable memory has more learning or generalization capacity than other approaches. The performance on the bAbI task is comparable to the best memory networks, but still worse than more traditional rule induction (see http://www.public.asu.edu/~cbaral/papers/aaai2016-sub.pdf). This is still clearly promising.\n\n The sequence of transformation in algorithm 1 looks sensible, though the authors do not discuss any other operation ordering. In particular, it is not clear to me that you need the node state update step T_h if you have the direct reference update step T_h,direct. \n\nIt is striking that the only trick that is essential for proper performance is the \u2018direct reference\u2019 , which actually has nothing to do with the graph building process, but is rather an attention mechanism for the graph input: attention is focused on words that are relevant to the node type rather than the whole sentence. So the question \u201chow useful are all these graph operations\u201d remain. A much simpler version of a similar trick may have been proposed in the context of memory networks, also for ICLR'17 (see match type in \"LEARNING END-TO-END GOAL-ORIENTED DIALOG\" by Bordes et al)\n\n\nThe authors also mention the time and size needed to train the model: is the issue arising for learning, inference or both? A description of the actual implementation would help (no pointer to open source code is provide). The author mentions Theano in one of my questions: how are the transformations compiled in advance as units? 
How is the gradient back-propagated through the graph is this one is only described at runtime?\n\n\nTypo: in the appendices B.2 and B.2.1, the right side of the equation that applies the update gate has h\u2019_nu while it should be h_nu.\n\nIn the references, the author could mention the pioneering work of Lee Giles on representing graphs with RNNs.\n\nRevision: I have improved my rating for the following reasons:\n- Pointers to an highly readable and well structured Theano source is provided.\n- The delta improvement of the paper has been impressive over the review process, and I am confident this will be an impactful paper.\n- Much simpler alternatives approaches such as Memory Networks seem to be plateauing for problems such as dialog modeling, we need alternatives.\n- The architecture is this work is still too complex, but this is often as we start with DNNs, and then find simplifications that actually improve performance\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning Graphical State Transitions
["Daniel D. Johnson"]
Graph-structured data is important in modeling relationships between multiple entities, and can be used to represent states of the world as well as many data structures. Li et al. (2016) describe a model known as a Gated Graph Sequence Neural Network (GGS-NN) that produces sequences from graph-structured input. In this work I introduce the Gated Graph Transformer Neural Network (GGT-NN), an extension of GGS-NNs that uses graph-structured data as an intermediate representation. The model can learn to construct and modify graphs in sophisticated ways based on textual input, and also to use the graphs to produce a variety of outputs. For example, the model successfully learns to solve almost all of the bAbI tasks (Weston et al., 2016), and also discovers the rules governing graphical formulations of a simple cellular automaton and a family of Turing machines.
["Natural language processing", "Deep learning", "Supervised Learning", "Structured prediction"]
https://openreview.net/forum?id=HJ0NvFzxl
https://openreview.net/pdf?id=HJ0NvFzxl
https://openreview.net/forum?id=HJ0NvFzxl&noteId=HkMx83V4l
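The node-state update step (T_h) questioned in the review above can be sketched as a GRU-style gated update driven by aggregated neighbour messages. The mean aggregation, weight shapes, and the absence of a reset gate are illustrative simplifications, not the paper's exact formulation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_node_update(H, A, Wz, Wh):
    # H: (nodes, d) node states; A: (nodes, nodes) adjacency matrix.
    deg = np.maximum(A.sum(1, keepdims=True), 1.0)
    M = (A @ H) / deg                      # mean incoming message per node
    X = np.concatenate([H, M], axis=1)     # (nodes, 2d)
    z = sigmoid(X @ Wz)                    # update gate
    h_tilde = np.tanh(X @ Wh)              # candidate state
    return (1.0 - z) * H + z * h_tilde     # gated interpolation

rng = np.random.default_rng(4)
n, d = 5, 8
H = rng.standard_normal((n, d))
A = (rng.random((n, n)) < 0.4).astype(float)
Wz, Wh = rng.standard_normal((2 * d, d)), rng.standard_normal((2 * d, d))
print(gated_node_update(H, A, Wz, Wh).shape)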
Hk_mPh-4e
HJ0NvFzxl
ICLR.cc/2017/conference/-/paper10/official/review
{"title": "", "rating": "9: Top 15% of accepted papers, strong accept", "review": "The paper proposes an extension of the Gated Graph Sequence Neural Network by including in this model the ability to produce complex graph transformations. The underlying idea is to propose a method that will be able build/modify a graph-structure as an internal representation for solving a problem, and particularly for solving question-answering problems in this paper. The author proposes 5 different possible differentiable transformations that will be learned on a training set, typically in a supervised fashion where the state of the graph is given at each timestep. A particular occurence of the model is presented that takes a sequence as an input a iteratively update an internal graph state to a final prediction, and which can be applied for solving QA tasks (e.g BaBi) with interesting results.\n\nThe approach in this paper is really interesting since the proposed model is able to maintain a representation of its current state as a complex graph, but still keeping the property of being differentiable and thus easily learnable through gradient-descent techniques. It can be seen as a succesfull attempt to mix continuous and symbolic representations. It moreover seems more general that the recent attempts made to add some 'symbolic' stuffs in differentiable models (Memory networks, NTM, etc...) since the shape of the state is not fixed here and can evolve. My main concerns is about the way the model is trained i.e by providing the state of the graph at each timestep which can be done for particular tasks (e.g Babi) only, and cannot be the solution for more complex problems. My other concern is about the whole content of the paper that would perhaps best fit a journal format and not a conference format, making the article still difficult to read due to its density. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning Graphical State Transitions
["Daniel D. Johnson"]
Graph-structured data is important in modeling relationships between multiple entities, and can be used to represent states of the world as well as many data structures. Li et al. (2016) describe a model known as a Gated Graph Sequence Neural Network (GGS-NN) that produces sequences from graph-structured input. In this work I introduce the Gated Graph Transformer Neural Network (GGT-NN), an extension of GGS-NNs that uses graph-structured data as an intermediate representation. The model can learn to construct and modify graphs in sophisticated ways based on textual input, and also to use the graphs to produce a variety of outputs. For example, the model successfully learns to solve almost all of the bAbI tasks (Weston et al., 2016), and also discovers the rules governing graphical formulations of a simple cellular automaton and a family of Turing machines.
["Natural language processing", "Deep learning", "Supervised Learning", "Structured prediction"]
https://openreview.net/forum?id=HJ0NvFzxl
https://openreview.net/pdf?id=HJ0NvFzxl
https://openreview.net/forum?id=HJ0NvFzxl&noteId=Hk_mPh-4e
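The "direct reference" trick singled out in the first review of this paper can be sketched as attention over input words scored by their relevance to a node's type, rather than feeding the whole sentence uniformly. The embeddings, toy sentence, and scoring by dot product below are illustrative assumptions, not the paper's exact mechanism.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(9)
d = 8
words = ["mary", "went", "to", "the", "garden"]
word_vecs = {w: rng.standard_normal(d) for w in words}
# Hypothetical embedding of a node type that should attend to "mary".
node_type_vec = word_vecs["mary"] + 0.1 * rng.standard_normal(d)

E = np.stack([word_vecs[w] for w in words])  # (words, d)
attn = softmax(E @ node_type_vec)            # relevance of each word to the node type
node_input = attn @ E                        # attended sentence summary for the node
print({w: round(float(a), 3) for w, a in zip(words, attn)})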
SkibszLEx
HJ0NvFzxl
ICLR.cc/2017/conference/-/paper10/official/review
{"title": "Architecture which allows to learn graph->graph tasks, improves state of the art on babi", "rating": "7: Good paper, accept", "review": "The main contribution of this paper seems to be an introduction of a set of differential graph transformations which will allow you to learn graph->graph classification tasks using gradient descent. This maps naturally to a task of learning a cellular automaton represented as sequence of graphs. In that task, the graph of nodes grows at each iteration, with nodes pointing to neighbors and special nodes 0/1 representing the values. Proposed architecture allows one to learn this sequence of graphs, although in the experiment, this task (Rule 30) was far from solved.\n\nThis idea is combined with ideas from previous papers (GGS-NN) to allow the model to produce textual output rather than graph output, and use graphs as intermediate representation, which allows it to beat state of the art on BaBi tasks. ", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}
review
2017
ICLR.cc/2017/conference
Learning Graphical State Transitions
["Daniel D. Johnson"]
Graph-structured data is important in modeling relationships between multiple entities, and can be used to represent states of the world as well as many data structures. Li et al. (2016) describe a model known as a Gated Graph Sequence Neural Network (GGS-NN) that produces sequences from graph-structured input. In this work I introduce the Gated Graph Transformer Neural Network (GGT-NN), an extension of GGS-NNs that uses graph-structured data as an intermediate representation. The model can learn to construct and modify graphs in sophisticated ways based on textual input, and also to use the graphs to produce a variety of outputs. For example, the model successfully learns to solve almost all of the bAbI tasks (Weston et al., 2016), and also discovers the rules governing graphical formulations of a simple cellular automaton and a family of Turing machines.
["Natural language processing", "Deep learning", "Supervised Learning", "Structured prediction"]
https://openreview.net/forum?id=HJ0NvFzxl
https://openreview.net/pdf?id=HJ0NvFzxl
https://openreview.net/forum?id=HJ0NvFzxl&noteId=SkibszLEx
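The Rule 30 task this review mentions as far from solved is the elementary cellular automaton below; each cell's next value depends only on its three-cell neighbourhood (new = left XOR (centre OR right)). Width, step count, and the wrap-around boundary are illustrative choices; the paper represents the evolving rows as graphs rather than strings.

import numpy as np

def rule30_step(row):
    left, right = np.roll(row, 1), np.roll(row, -1)
    # Rule 30: new cell = left XOR (centre OR right)
    return left ^ (row | right)

width, steps = 31, 12
row = np.zeros(width, dtype=int)
row[width // 2] = 1  # single live cell in the middle
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)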
Hkes73e4g
S1Bb3D5gg
ICLR.cc/2017/conference/-/paper428/official/review
{"title": "Review", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper presents a new, public dataset and tasks for goal-oriented dialogue applications. The dataset and tasks are constructed artificially using rule-based programs, in such a way that different aspects of dialogue system performance can be evaluated ranging from issuing API calls to displaying options, as well as full-fledged dialogue.\n\nThis is a welcome contribution to the dialogue literature, which will help facilitate future research into developing and understanding dialogue systems. Still, there are pitfalls in taking this approach. First, it is not clear how suitable Deep Learning models are for these tasks compared to traditional methods (rule-based systems or shallow models), since Deep Learning models are known to require many training examples and therefore performance difference between different neural networks may simply boil down to regularization techniques. The tasks 1-5 are also completely deterministic, which means evaluating performance on these tasks won't measure the ability of the models to handle noisy and ambiguous interactions (e.g. inferring a distribution over user goals, or executing dialogue repair strategies), which is a very important aspect in dialogue applications. Overall, I still believe this is an interesting direction to explore.\n\nAs discussed in the comments below, the paper does not have any baseline model with word order information. I think this is a strong weakness of the paper, because it makes the neural networks appear unreasonably strong, yet simpler baselines could very likely be be competitive (or better) than the proposed neural networks. To maintain a fair evaluation and correctly assess the power of representation learning for this task, I think it's important that the authors experiment with one additional non-neural network benchmark model which takes into account word order information. This would more convincly demonstrate the utility of Deep Learning models for this task. For example, the one could experiment with a logistic regression model which takes as input 1) word embeddings (similar to the Supervised Embeddings model), 2) bi-gram features, and 3) match-type features. If such a baseline is included, I will increase my rating to 8.\n\n\n\nFinal minor comment: in the conclusion, the paper states \"the existing work has no well defined measures of performances\". This is not really true. End-to-end trainable models for task-oriented dialogue have well-defined performance measures. See, for example \"A Network-based End-to-End Trainable Task-oriented Dialogue System\" by Wen et al. On the other hand, non-goal-oriented dialogue are generally harder to evaluate, but given human subjects these can also be evaluated. In fact, this is what Liu et al (2016) do for Twitter. See also, \"Strategy and Policy Learning for Non-Task-Oriented Conversational Systems\" by Yu et al.\n\n----\n\nI've updated my score following the new results added in the paper.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Learning End-to-End Goal-Oriented Dialog
["Antoine Bordes", "Y-Lan Boureau", "Jason Weston"]
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
["dialog", "applications", "dialog systems", "data", "lot", "handcrafting", "new domains", "components", "dialogs"]
https://openreview.net/forum?id=S1Bb3D5gg
https://openreview.net/pdf?id=S1Bb3D5gg
https://openreview.net/forum?id=S1Bb3D5gg&noteId=Hkes73e4g
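The word-order-aware baseline requested in the review above can be sketched by its feature construction: averaged word embeddings, hashed bigram indicators, and "match type" flags that fire when an utterance word appears in a knowledge-base field. The vocabulary, KB entries, and dimensions are illustrative assumptions; these features would feed a standard logistic-regression ranker.

import numpy as np

rng = np.random.default_rng(5)
EMB = {w: rng.standard_normal(16) for w in
       "i would like some indian food in a cheap place".split()}
KB_TYPES = {"cuisine": {"indian", "french"}, "price": {"cheap", "expensive"}}
N_BIGRAM_BUCKETS = 64

def featurise(utterance):
    words = utterance.lower().split()
    # 1) averaged word embeddings (order-insensitive)
    emb = np.mean([EMB.get(w, np.zeros(16)) for w in words], axis=0)
    # 2) hashed bigram indicators (inject word-order information)
    bigrams = np.zeros(N_BIGRAM_BUCKETS)
    for a, b in zip(words, words[1:]):
        bigrams[hash((a, b)) % N_BIGRAM_BUCKETS] = 1.0
    # 3) match-type flags: does any word appear in a KB field?
    match = np.array([float(any(w in vals for w in words))
                      for vals in KB_TYPES.values()])
    return np.concatenate([emb, bigrams, match])

print(featurise("i would like some cheap indian food").shape)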
Bk118K4Ne
S1Bb3D5gg
ICLR.cc/2017/conference/-/paper428/official/review
{"title": "Thought provoking paper, more on the metrics than the algorithms.", "rating": "8: Top 50% of accepted papers, clear accept", "review": "Attempts to use chatbots for every form of human-computer interaction has been a major trend in 2016, with claims that they could solve many forms of dialogs beyond simple chit-chat. This paper represents a serious reality check. While it is mostly relevant for Dialog/Natural Language venues (to educate software engineer about the limitations of current chatbots), it can also be published at Machine Learning venues (to educate researchers about the need for more realistic validation of ML applied to dialogs), so I would consider this work of high significance.\n\nTwo important conjectures are underlying this paper and likely to open to more research. While they are not in writing, Antoine Bordes clearly stated them during a NIPS workshop presentation that covered this work. Considering the metrics chosen in this paper:\n1)\tThe performance of end2end ML approaches is still insufficient for goal oriented dialogs.\n2)\tWhen comparing algorithms, relative performance on synthetic data is a good predictor of performance on natural data. This would be quite a departure from previous observations, but the authors made a strong effort to match the synthetic and natural conditions.\n\nWhile its original algorithmic contribution consists in one rather simple addition to memory networks (match type), it is the first time these are deployed and tested on a goal-oriented dialog, and the experimental protocol is excellent. The overall paper clarity is excellent and accessible to a readership beyond ML and dialog researchers. I was in particular impressed by how the short appendix on memory networks summarized them so well, followed by the tables that explained the influence of the number of hops.\n\nWhile this paper represents the state-of-the-art in the exploration of more rigorous metrics for dialog modeling, it also reminds us how brittle and somewhat arbitrary these remain. Note this is more a recommendation for future research than for revision.\n\nFirst they use the per-response accuracy (basically the next utterance classification among a fixed list of responses). Looking at table 3 clearly shows how absurd this can be in practice: all that matters is a correct API call and a reasonably short dialog, though this would only give us a 1/7 accuracy, as the 6 bot responses needed to reach the API call also have to be exact.\n\nWould the per-dialog accuracy, where all responses must be correct, be better? Table 2 shows how sensitive it is to the experimental protocol. I was initially puzzled that the accuracy for subtask T3 (0.0) was much lower that the accuracy for the full dialog T5 (19.7), until the authors pointed me to the tasks definitions (3.1.1) where T3 requires displaying 3 options while T5 only requires displaying one.\n\nFor the concierge data, what would happen if \u2018correct\u2019 meant being the best, not among the 5-best? \n\nWhile I cannot fault the authors for using standard dialog metrics, and coming up with new ones that are actually too pessimistic, I can think of one way to represent dialogs that could result in more meaningful metrics in goal oriented dialogs. Suppose I sell Virtual Assistants as a service, being paid upon successful completion of a dialog. What is the metric that would maximize my revenue? 
In this restaurant problem, the loss would probably be some weighted sum of the number of errors in the API call, the number of turns to reach that API call and the number of rejected options by the user. However, such as loss cannot be measured on canned dialogs and would either require a real human user or an realistic simulator\n\nAnother issue closely related to representation learning that this paper fails to address or explain properly is what happens if the vocabulary used by the user does not match exactly the vocabulary in the knowledge base. In particular, for the match type algorithm to code \u2018Indian\u2019 as \u2018type of cuisine\u2019, this word would have to occur exactly in the KB. I can imagine situations where the KB uses some obfuscated terminology, and we would like ML to learn the associations rather than humans to hand-describe them.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning End-to-End Goal-Oriented Dialog
["Antoine Bordes", "Y-Lan Boureau", "Jason Weston"]
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
["dialog", "applications", "dialog systems", "data", "lot", "handcrafting", "new domains", "components", "dialogs"]
https://openreview.net/forum?id=S1Bb3D5gg
https://openreview.net/pdf?id=S1Bb3D5gg
https://openreview.net/forum?id=S1Bb3D5gg&noteId=Bk118K4Ne
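The two metrics dissected in the review above are mechanically simple; a minimal sketch with toy predictions follows. Per-dialog accuracy requires every turn to be correct, which is why it is so much harsher than per-response accuracy.

def per_response_accuracy(dialogs):
    # Fraction of individual turns predicted correctly.
    turns = [t for d in dialogs for t in d]
    return sum(pred == gold for pred, gold in turns) / len(turns)

def per_dialog_accuracy(dialogs):
    # Fraction of dialogs in which *every* turn is correct.
    return sum(all(pred == gold for pred, gold in d)
               for d in dialogs) / len(dialogs)

# Each dialog is a list of (predicted, gold) response pairs.
dialogs = [
    [("api_call indian", "api_call indian"), ("ok", "ok")],
    [("api_call french", "api_call italian"), ("ok", "ok")],
]
print(per_response_accuracy(dialogs))  # 3/4 = 0.75
print(per_dialog_accuracy(dialogs))    # 1/2 = 0.5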
rky-ix7Ee
S1Bb3D5gg
ICLR.cc/2017/conference/-/paper428/official/review
{"title": "Review", "rating": "7: Good paper, accept", "review": "SYNOPSIS:\nThis paper introduces a new dataset for evaluating end-to-end goal-oriented dialog systems. All data is generated in the restaurant setting, where the goal is to find availability and eventually book a table based on parameters provided by the user to the bot as part of a dialog. Data is generated by running a simulation using an underlying knowledge base to generate samples for the different parameters (cuisine, price range, etc), and then applying rule-based transformations to render natural language descriptions. The objective is to rank a set of candidate responses for each next turn of the dialog, and evaluation is reported in terms of per-response accuracy and per-dialog accuracy. The authors show that Memory Networks are able to improve over basic bag-of-words baselines.\n\nTHOUGHTS:\nI want to thank the authors for an interesting contribution. Having said that, I am skeptical about the utility of end-to-end trained systems in the narrow-domain setting. In the open-domain setting, there is a strong argument to be made that hand-coding all states and responses would not scale, and hence end-to-end trained methods make a lot of sense. However, in the narrow-domain setting, we usually know and understand the domain quite well, and the goal is to obtain high user satisfaction. Doesn't it then make sense in these cases to use the domain knowledge to engineer the best system possible?\n\nGiven that the domain is already restricted, I'm also a bit disappointed that the goal is to RANK instead of GENERATE responses, although I understand that this makes evaluation much easier. I'm also unsure how these candidate responses would actually be obtained in practice? It seems that the models rank the set of all responses in train/val/test (last sentence before Sec 3.2). Since a key argument for the end-to-end training approach is ease of scaling to new domains without having to manually re-engineer the system, where is this information obtained for a new domain in practice? Generating responses would allow much better generalization to new domains, as opposed to simply ranking some list of hand-collected generic responses, and in my mind this is the weakest part of this work.\n\nFinally, as data is generated using a simulation by expanding (cuisine, price, ...) tuples using NL-generation rules, it necessarily constrains the variability in the training responses. Of course, this is traded off with the ability to generate unlimited data using the simulator. But I was unable to see the list of rules that was used. It would be good to publish this as well.\n\nOverall, despite my skepticism, I think it is an interesting contribution worthy of publication at the conference. \n\n------\n\nI've updated my score following the clarifications and new results.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning End-to-End Goal-Oriented Dialog
["Antoine Bordes", "Y-Lan Boureau", "Jason Weston"]
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
["dialog", "applications", "dialog systems", "data", "lot", "handcrafting", "new domains", "components", "dialogs"]
https://openreview.net/forum?id=S1Bb3D5gg
https://openreview.net/pdf?id=S1Bb3D5gg
https://openreview.net/forum?id=S1Bb3D5gg&noteId=rky-ix7Ee
r1w-zAZ4e
r10FA8Kxg
ICLR.cc/2017/conference/-/paper102/official/review
{"title": "Experimental comparison of shallow, deep, and (non)-convolutional architectures with a fixed parameter budget", "rating": "7: Good paper, accept", "review": "This paper aims to investigate the question if shallow non-convolutional networks can be as affective as deep convolutional ones for image classification, given that both architectures use the same number of parameters. \nTo this end the authors conducted a series of experiments on the CIFAR10 dataset.\nThey find that there is a significant performance gap between the two approaches, in favour of deep CNNs. \nThe experiments are well designed and involve a distillation training approach, and the results are presented in a comprehensive manner.\nThey also observe (as others have before) that student models can be shallower than the teacher model from which they are trained for comparable performance.\n\nMy take on these results is that they suggest that using (deep) conv nets is more effective, since this model class encodes a form of a-prori or domain knowledge that images exhibit a certain degree of translation invariance in the way they should be processed for high-level recognition tasks. The results are therefore perhaps not quite surprising, but not completely obvious either.\n\nAn interesting point on which the authors comment only very briefly is that among the non-convolutional architectures the ones using 2 or 3 hidden layers outperform those with 1, 4 or 5 hidden layers. Do you have an interpretation / hypothesis of why this is the case? It would be interesting to discuss the point a bit more in the paper.\n\nIt was not quite clear to me why were the experiments were limited to use 30M parameters at most. None of the experiments in Figure 1 seem to be saturated. Although the performance gap between CNN and MLP is large, I think it would be worthwhile to push the experiment further for the final version of the paper.\n\nThe authors state in the last paragraph that they expect shallow nets to be relatively worse in an ImageNet classification experiment. \nCould the authors argue why they think this to be the case? \nOne could argue that the much larger training dataset size could compensate for shallow and/or non-convolutional choices of the architecture. \nSince MLPs are universal function approximators, one could understand architecture choices as expressions of certain priors over the function space, and in a large-data regimes such priors could be expected to be of lesser importance.\nThis issue could for example be examined on ImageNet when varying the amount of training data.\nAlso, the much higher resolution of ImageNet images might have a non-trivial impact on the CNN-MLP comparison as compared to the results established on the CIFAR10 dataset.\n\nExperiments on a second data set would also help to corroborate the findings, demonstrating to what extent such findings are variable across datasets.\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
["Gregor Urban", "Krzysztof J. Geras", "Samira Ebrahimi Kahou", "Ozlem Aslan", "Shengjie Wang", "Abdelrahman Mohamed", "Matthai Philipose", "Matt Richardson", "Rich Caruana"]
Yes, they do. This paper provides the first empirical demonstration that deep convolutional models really need to be both deep and convolutional, even when trained with methods such as distillation that allow small or shallow models of high accuracy to be trained. Although previous research showed that shallow feed-forward nets sometimes can learn the complex functions previously learned by deep nets while using the same number of parameters as the deep models they mimic, in this paper we demonstrate that the same methods cannot be used to train accurate models on CIFAR-10 unless the student models contain multiple layers of convolution. Although the student models do not have to be as deep as the teacher model they mimic, the students need multiple convolutional layers to learn functions of comparable accuracy as the deep convolutional teacher.
["Deep learning", "Transfer Learning"]
https://openreview.net/forum?id=r10FA8Kxg
https://openreview.net/pdf?id=r10FA8Kxg
https://openreview.net/forum?id=r10FA8Kxg&noteId=r1w-zAZ4e
BkaSqlzEe
r10FA8Kxg
ICLR.cc/2017/conference/-/paper102/official/review
{"title": "Experimental paper with interesting results. Well written. Solid experiments. ", "rating": "7: Good paper, accept", "review": "Description.\nThis paper describes experiments testing whether deep convolutional networks can be replaced with shallow networks with the same number of parameters without loss of accuracy. The experiments are performed on he CIFAR 10 dataset where deep convolutional teacher networks are used to train shallow student networks using L2 regression on logit outputs. The results show that similar accuracy on the same parameter budget can be only obtained when multiple layers of convolution are used. \n\nStrong points.\n- The experiments are carefully done with thorough selection of hyperparameters. \n- The paper shows interesting results that go partially against conclusions from the previous work in this area (Ba and Caruana 2014).\n- The paper is well and clearly written.\n\nWeak points:\n- CIFAR is still somewhat toy dataset with only 10 classes. It would be interesting to see some results on a more challenging problem such as ImageNet. Would the results for a large number of classes be similar?\n\nOriginality:\n- This is mainly an experimental paper, but the question it asks is interesting and worth investigation. The experimental results are solid and provide new insights.\n\nQuality:\n- The experiments are well done.\n\nClarity:\n- The paper is well written and clear.\n\nSignificance:\n- The results go against some of the conclusions from previous work, so should be published and discussed.\n\nOverall:\nExperimental paper with interesting results. Well written. Solid experiments. \n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
["Gregor Urban", "Krzysztof J. Geras", "Samira Ebrahimi Kahou", "Ozlem Aslan", "Shengjie Wang", "Abdelrahman Mohamed", "Matthai Philipose", "Matt Richardson", "Rich Caruana"]
Yes, they do. This paper provides the first empirical demonstration that deep convolutional models really need to be both deep and convolutional, even when trained with methods such as distillation that allow small or shallow models of high accuracy to be trained. Although previous research showed that shallow feed-forward nets sometimes can learn the complex functions previously learned by deep nets while using the same number of parameters as the deep models they mimic, in this paper we demonstrate that the same methods cannot be used to train accurate models on CIFAR-10 unless the student models contain multiple layers of convolution. Although the student models do not have to be as deep as the teacher model they mimic, the students need multiple convolutional layers to learn functions of comparable accuracy as the deep convolutional teacher.
["Deep learning", "Transfer Learning"]
https://openreview.net/forum?id=r10FA8Kxg
https://openreview.net/pdf?id=r10FA8Kxg
https://openreview.net/forum?id=r10FA8Kxg&noteId=BkaSqlzEe
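The mimic-training setup both reviews describe (students fit by L2 regression on the teacher's logits rather than on hard labels) can be sketched with linear stand-ins for the deep teacher and shallow student so the example stays self-contained; dimensions and learning rate are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(6)
d, n, classes = 20, 2000, 10
X = rng.standard_normal((n, d))
W_teacher = rng.standard_normal((d, classes))
teacher_logits = X @ W_teacher          # targets are logits, not labels

W_student = np.zeros((d, classes))
lr = 0.05
for _ in range(200):
    # Gradient of the mean squared error between student and teacher logits.
    grad = X.T @ (X @ W_student - teacher_logits) / n
    W_student -= lr * grad

err = np.mean((X @ W_student - teacher_logits) ** 2)
print(round(float(err), 6))  # approaches 0 as the student mimics the teacher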
BkxN0nr4l
Hk85q85ee
ICLR.cc/2017/conference/-/paper316/official/review
{"title": "Optimization of a ReLU network under new assumptions", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This work analyzes the continuous-time dynamics of gradient descent when training two-layer ReLU networks (one input, one output, thus only one layer of ReLU units). The work is interesting in the sense that it does not involve some unrealistic assumptions used by previous works with similar goal. Most importantly, this work does not assume independence between input and activations, and it does not rely on noise injection (which can simplify the analysis). Nonetheless, removing these simplifying assumptions comes at the expense of limiting the analysis to:\n1. Only one layer of nonlinear units\n2. Discarding the bias term in ReLU while keeping the input Gaussian (thus constant input trick cannot be used to simulate the bias term).\n3. Imposing strong assumption on the representation on the input/output via (bias-less) ReLU networks: existence of orthonormal bases to represent this relationships.\n\nHaving that said, as far as I can tell, the paper presents original analysis in this new setting, which is interesting and valuable. For example, by exploiting the symmetry in the problem under the assumption 3 I listed above, the authors are able to reduce the high-dimensional dynamics of the gradient descent to a bivariate dynamics (instead of dealing with original size of the parameters). Such reduction to 2D allows the author to rigorously analyze the behavior of the dynamics (e.g. convergence to a saddle point in symmetric case, or to the optimum in non-symmetric case).\n\nClarification Needed: first paragraph of page 2. Near the end of the paragraph you say \"Initialization can be arbitrarily close to origin\", but at the beginning of the same paragraph you state \"initialized randomly with standard deviation of order 1/sqrt(d)\". Aren't these inconsistent?\n\nSome minor comments about the draft:\n1. In section 1, 2nd paragraph: \"We assume x is Gaussian and thus the network is bias free\". Do you mean \"zero-mean\" Gaussian then?\n2. \"standard deviation\" is spelled \"standard derivation\" multiple times in the paper.\n3. Page 6, last paragraph, first line: Corollary 4.1 should be Corollary 4.2\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity
["Yuandong Tian"]
In this paper, we use dynamical systems to analyze the nonlinear weight dynamics of two-layered bias-free networks in the form of $g(x; w) = \sum_{j=1}^K \sigma(w_j \cdot x)$, where $\sigma(\cdot)$ is the ReLU nonlinearity. We assume that the input $x$ follows a Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters $w*$ using $l_2$ loss. We first show that when $K = 1$, the nonlinear dynamics can be written in closed form, and converges to $w*$ with at least $(1-\epsilon)/2$ probability, if random weight initialization of proper standard deviation ($\sim 1/\sqrt{d}$) is used, verifying empirical practice. For networks with many ReLU nodes ($K \ge 2$), we apply our closed-form dynamics and prove that when the teacher parameters $\{w*_j\}_{j=1}^K$ form orthonormal bases, (1) a symmetric weight initialization yields convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to $w*$ without local minima. To our knowledge, this is the first proof that shows global convergence in a nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with $l_2$ loss. Simulations verify our theoretical analysis.
["Theory", "Deep learning", "Optimization"]
https://openreview.net/forum?id=Hk85q85ee
https://openreview.net/pdf?id=Hk85q85ee
https://openreview.net/forum?id=Hk85q85ee&noteId=BkxN0nr4l
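The training setup analysed in the abstract above admits a direct simulation: a student g(x; w) = sum_j ReLU(w_j . x) follows a fixed same-size teacher under l2 loss with Gaussian inputs. Dimensions, step size, and the use of minibatch SGD in place of the paper's continuous-time dynamics are illustrative choices.

import numpy as np

rng = np.random.default_rng(7)
d, K, batch, lr = 10, 2, 256, 0.05
W_star = np.eye(d)[:K]                        # orthonormal teacher rows
W = rng.standard_normal((K, d)) / np.sqrt(d)  # ~1/sqrt(d) initialization

for step in range(3001):
    X = rng.standard_normal((batch, d))       # Gaussian inputs
    pre_s, pre_t = X @ W.T, X @ W_star.T
    err = np.maximum(0, pre_s).sum(1) - np.maximum(0, pre_t).sum(1)
    # Gradient of 0.5 * mean(err^2) w.r.t. student row w_j uses the
    # ReLU gate 1[w_j . x > 0] on each sample.
    grad = ((err[:, None] * (pre_s > 0)).T @ X) / batch
    W -= lr * grad
    if step % 1000 == 0:
        print(step, "loss", round(float(np.mean(err ** 2)), 5))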
SJVUCuuNg
Hk85q85ee
ICLR.cc/2017/conference/-/paper316/official/review
{"title": "Potentially new analysis, but hard to read", "rating": "4: Ok but not good enough - rejection", "review": "The paper proposes a convergence analysis of some two-layer NNs with ReLUs. It is not the first such analysis, but maybe it is novel on the assumptions used in the analysis, and the focus on ReLU nonlinearity that is pretty popular in practice. \n\nThe paper is quite hard to read, with many English mistakes and typos. Nevertheless, the analysis seems to be generally correct. The novelty and the key insights are however not always well motivated or presented. And the argument that the work uses realistic assumptions (Gaussian inputs for example) as opposed to other works, is quite debatable actually. \n\nOverall, the paper looks like a correct analysis work, but its form is really suboptimal in terms of writing/presentation, and the novelty and relevance of the results are not always very clear, unfortunately. The main results and intuition should be more clearly presented, and details could be moved to appendices for example - that could only help to improve the visibility and impact of these interesting results. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity
["Yuandong Tian"]
In this paper, we use dynamical systems to analyze the nonlinear weight dynamics of two-layered bias-free networks in the form of $g(x; w) = \sum_{j=1}^K \sigma(w_j \cdot x)$, where $\sigma(\cdot)$ is the ReLU nonlinearity. We assume that the input $x$ follows a Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters $w*$ using $l_2$ loss. We first show that when $K = 1$, the nonlinear dynamics can be written in closed form, and converges to $w*$ with at least $(1-\epsilon)/2$ probability, if random weight initialization of proper standard deviation ($\sim 1/\sqrt{d}$) is used, verifying empirical practice. For networks with many ReLU nodes ($K \ge 2$), we apply our closed-form dynamics and prove that when the teacher parameters $\{w*_j\}_{j=1}^K$ form orthonormal bases, (1) a symmetric weight initialization yields convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to $w*$ without local minima. To our knowledge, this is the first proof that shows global convergence in a nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with $l_2$ loss. Simulations verify our theoretical analysis.
["Theory", "Deep learning", "Optimization"]
https://openreview.net/forum?id=Hk85q85ee
https://openreview.net/pdf?id=Hk85q85ee
https://openreview.net/forum?id=Hk85q85ee&noteId=SJVUCuuNg
HkAvHKxNl
Hk85q85ee
ICLR.cc/2017/conference/-/paper316/official/review
{"title": "Hard to read paper; unclear conclusions.", "rating": "4: Ok but not good enough - rejection", "review": "In this paper, the author analyzes the convergence dynamics of a single layer non-linear network under Gaussian iid input assumptions. The first half of the paper, dealing with a single hidden node, was somewhat clear, although I have some specific questions below. The second half, dealing with multiple hidden nodes, was very difficult for me to understand, and the final \"punchline\" is quite unclear. I think the author should focus on intuition and hide detailed derivations and symbols in an appendix. \n\nIn terms of significance, it is very hard for me to be sure how generalizable these results are: the Gaussian assumption is a very strong one, and so is the assumption of iid inputs. Real-world feature inputs are highly correlated and are probably not Gaussian. Such assumptions are not made (as far as I can tell) in recent papers analyzing the convergence of deep networks e.g. Kawaguchi, NIPS 2016. Although the author says the no assumption is made on the independence of activations, this assumption is shifted to the input instead. I think this means that the activations are combinations of iid random variables, and are probably Gaussian like, right? So I'm not sure where this leaves us.\n\nSpecific comments:\n\n1. Please use D_w instead of D to show that D is a function of w, and not a constant. This gets particularly confusing when switching to D(w) and D(e) in Section 3. In general, notation in the paper is hard to follow and should be clearly introduced.\n\n2. Section 3, statement that says \"when the neuron is cut off at sample l, then (D^(t))_u\" what is the relationship between l and u? Also, this is another example of notational inconsistency that causes problems to the reader.\n\n3. Section 3.1, what is F(e, w) and why is D(e) introduced? This was unclear to me.\n\n4. Theorem 3.3 suggests that (if \\epsilon is > 0), then to have the maximal probability of convergence, \\epsilon should be very close to 0, which means that the ball B_r has radius r -> 0? This seems contradictory from Figure 2. \n\n5. Section 4 was really unclear and I still do not understand what the symmetry group really represents. Is there an intuitive explanation why this is important?\n\n6. Figure 5: what is a_j ?\n\nI encourage the author to rewrite this paper for clarity. In it's present form, it would be very difficult to understand the takeaways from the paper.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity
["Yuandong Tian"]
In this paper, we use dynamical systems to analyze the nonlinear weight dynamics of two-layered bias-free networks in the form of $g(x; w) = \sum_{j=1}^K \sigma(w_j \cdot x)$, where $\sigma(\cdot)$ is the ReLU nonlinearity. We assume that the input $x$ follows a Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters $w*$ using $l_2$ loss. We first show that when $K = 1$, the nonlinear dynamics can be written in closed form, and converges to $w*$ with at least $(1-\epsilon)/2$ probability, if random weight initialization of proper standard deviation ($\sim 1/\sqrt{d}$) is used, verifying empirical practice. For networks with many ReLU nodes ($K \ge 2$), we apply our closed-form dynamics and prove that when the teacher parameters $\{w*_j\}_{j=1}^K$ form orthonormal bases, (1) a symmetric weight initialization yields convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to $w*$ without local minima. To our knowledge, this is the first proof that shows global convergence in a nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with $l_2$ loss. Simulations verify our theoretical analysis.
["Theory", "Deep learning", "Optimization"]
https://openreview.net/forum?id=Hk85q85ee
https://openreview.net/pdf?id=Hk85q85ee
https://openreview.net/forum?id=Hk85q85ee&noteId=HkAvHKxNl
rkCS99SVl
Skvgqgqxe
ICLR.cc/2017/conference/-/paper164/official/review
{"title": "official review", "rating": "8: Top 50% of accepted papers, clear accept", "review": "The paper proposes to use reinforcement learning to learn how to compose the words in a sentence, i.e. parse tree, that can be helpful for the downstream tasks. To do that, the shift-reduce framework is employed and RL is used to learn the policy of the two actions SHIFT and REDUCE. The experiments on four datasets (SST, SICK, IMDB, and SNLI) show that the proposed approach outperformed the approach using predefined tree structures (e.g. left-to-right, right-to-left). \n\nThe paper is well written and has two good points. Firstly, the idea of using RL to learn parse trees using downstream tasks is very interesting and novel. And employing the shift-reduce framework is a very smart choice because the set of actions is minimal (shift and reduce). Secondly, what shown in the paper somewhat confirms the need of parse trees. This is indeed interesting because of the current debate on whether syntax is helpful.\n\nI have the following comments:\n- it seems that the authors weren't aware of some recent work using RL to learn structures for composition, e.g. Andreas et al (2016).\n- because different composition functions (e.g. LSTM, GRU, or classical recursive neural net) have different inductive biases, I was wondering if the tree structures found by the model would be independent from the composition function choice.\n- because RNNs in theory are equivalent to Turing machines, I was wondering if restricting the expressiveness of the model (e.g. reducing the dimension) can help the model focus on discovering more helpful tree structures.\n\nRef:\nAndreas et al. Learning to Compose Neural Networks for Question Answering. NAACL 2016", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning to Compose Words into Sentences with Reinforcement Learning
["Dani Yogatama", "Phil Blunsom", "Chris Dyer", "Edward Grefenstette", "Wang Ling"]
We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences. In contrast with prior work on tree-structured models, in which the trees are either provided as input or predicted using supervision from explicit treebank annotations, the tree structures in this work are optimized to improve performance on a downstream task. Experiments demonstrate the benefit of learning task-specific composition orders, outperforming both sequential encoders and recursive encoders based on treebank annotations. We analyze the induced trees and show that while they discover some linguistically intuitive structures (e.g., noun phrases, simple verb phrases), they are different than conventional English syntactic structures.
["words", "sentences", "reinforcement", "reinforcement learning", "neural networks", "representations", "natural language sentences", "contrast", "prior work", "models"]
https://openreview.net/forum?id=Skvgqgqxe
https://openreview.net/pdf?id=Skvgqgqxe
https://openreview.net/forum?id=Skvgqgqxe&noteId=rkCS99SVl
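To make the shift-reduce framing in the review above concrete, here is a toy REINFORCE sketch. It is entirely an illustrative assumption rather than the authors' implementation: a logistic policy over SHIFT/REDUCE builds a binary tree over word vectors, composition is a plain average, and the downstream reward is a stand-in squared-error objective.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
theta = rng.standard_normal(2 * dim) * 0.1               # policy features: [top-of-stack; next word]

def episode(words, target):
    """One stochastic shift-reduce parse; returns reward and grad of log-prob."""
    stack, buf = [], list(words)
    grad = np.zeros_like(theta)
    while buf or len(stack) > 1:
        can_shift, can_reduce = bool(buf), len(stack) >= 2
        if can_shift and can_reduce:
            feat = np.concatenate([stack[-1], buf[0]])
            p = 1.0 / (1.0 + np.exp(-theta @ feat))      # P(SHIFT)
            shift = rng.random() < p
            grad += ((1.0 if shift else 0.0) - p) * feat # d log pi / d theta for this choice
        else:
            shift = can_shift                            # forced move
        if shift:
            stack.append(buf.pop(0))
        else:
            right, left = stack.pop(), stack.pop()
            stack.append(0.5 * (left + right))           # placeholder composition function
    reward = -np.sum((stack[0] - target) ** 2)           # stand-in downstream reward
    return reward, grad

words = [rng.standard_normal(dim) for _ in range(5)]
target = np.mean(words, axis=0)
for _ in range(200):                                     # REINFORCE: ascend reward * grad-log-prob
    r, g = episode(words, target)
    theta += 0.01 * r * g
print("final reward:", episode(words, target)[0])
```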
r19SqUiNe
Skvgqgqxe
ICLR.cc/2017/conference/-/paper164/official/review
{"title": "Accept", "rating": "7: Good paper, accept", "review": "I have not much to add to my pre-review comments.\nIt's a very well written paper with an interesting idea.\nLots of people currently want to combine RL with NLP. It is very en vogue.\nNobody has gotten that to work yet in any really groundbreaking or influential way that results in actually superior performance on any highly relevant or competitive NLP task.\nMost people struggle with the fact that NLP requires very efficient methods on very large datasets and RL is super slow.\nHence, I believe this direction hasn't shown much promise yet and it's not yet clear it ever will due to the slowness of RL.\nBut many directions need to be explored and maybe eventually they will reach a point where they become relevant.\n\nIt is interesting to learn the obviously inherent grammatical structure in language though sadly again, the trees here do not yet capture much of what our intuitions are.\n\nRegardless, it's an interesting exploration, worthy of being discussed at the conference.\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Learning to Compose Words into Sentences with Reinforcement Learning
["Dani Yogatama", "Phil Blunsom", "Chris Dyer", "Edward Grefenstette", "Wang Ling"]
We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences. In contrast with prior work on tree-structured models, in which the trees are either provided as input or predicted using supervision from explicit treebank annotations, the tree structures in this work are optimized to improve performance on a downstream task. Experiments demonstrate the benefit of learning task-specific composition orders, outperforming both sequential encoders and recursive encoders based on treebank annotations. We analyze the induced trees and show that while they discover some linguistically intuitive structures (e.g., noun phrases, simple verb phrases), they are different than conventional English syntactic structures.
["words", "sentences", "reinforcement", "reinforcement learning", "neural networks", "representations", "natural language sentences", "contrast", "prior work", "models"]
https://openreview.net/forum?id=Skvgqgqxe
https://openreview.net/pdf?id=Skvgqgqxe
https://openreview.net/forum?id=Skvgqgqxe&noteId=r19SqUiNe
B1OyMaWNg
Skvgqgqxe
ICLR.cc/2017/conference/-/paper164/official/review
{"title": "Weak experimental results", "rating": "6: Marginally above acceptance threshold", "review": "In this paper, the authors propose a new method to learn hierarchical representations of sentences, based on reinforcement learning. They propose to learn a neural shift-reduce parser, such that the induced tree structures lead to good performance on a downstream task. They use reinforcement learning (more specifically, the policy gradient method REINFORCE) to learn their model. The reward of the algorithm is the evaluation metric of the downstream task. The authors compare two settings, (1) no structure information is given (hence, the only supervision comes from the downstream task) and (2) actions from an external parser is used as supervision to train the policy network, in addition to the supervision from the downstream task. The proposed approach is evaluated on four tasks: sentiment analysis, semantic relatedness, textual entailment and sentence generation.\n\nI like the idea of learning tree representations of text which are useful for a downstream task. The paper is clear and well written. However, I am not convinced by the experimental results presented in the paper. Indeed, on most tasks, the proposed model is far from state-of-the-art models:\n - sentiment analysis, 86.5 v.s. 89.7 (accuracy);\n - semantic relatedness, 0.32 v.s. 0.25 (MSE);\n - textual entailment, 80.5 v.s. 84.6 (accuracy).\nFrom the results presented in the paper, it is hard to know if these results are due to the model, or because of the reinforcement learning algorithm.\n\nPROS:\n - interesting idea: learning structures of sentences adapted for a downstream task.\n - well written paper.\nCONS:\n - weak experimental results (do not really support the claim of the authors).\n\nMinor comments:\nIn the second paragraph of the introduction, one might argue that bag-of-words is also a predominant approach to represent sentences.\nParagraph titles (e.g. in section 3.2) should have a period at the end.\n\n----------------------------------------------------------------------------------------------------------------------\nUPDATE\n\nI am still not convinced by the results presented in the paper, and in particular by the fact that one must combine the words in a different way than left-to-right to obtain state of the art results.\nHowever, I do agree that this is an interesting research direction, and that the results presented in the paper are promising. I am thus updating my score from 5 to 6.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning to Compose Words into Sentences with Reinforcement Learning
["Dani Yogatama", "Phil Blunsom", "Chris Dyer", "Edward Grefenstette", "Wang Ling"]
We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences. In contrast with prior work on tree-structured models, in which the trees are either provided as input or predicted using supervision from explicit treebank annotations, the tree structures in this work are optimized to improve performance on a downstream task. Experiments demonstrate the benefit of learning task-specific composition orders, outperforming both sequential encoders and recursive encoders based on treebank annotations. We analyze the induced trees and show that while they discover some linguistically intuitive structures (e.g., noun phrases, simple verb phrases), they are different than conventional English syntactic structures.
["words", "sentences", "reinforcement", "reinforcement learning", "neural networks", "representations", "natural language sentences", "contrast", "prior work", "models"]
https://openreview.net/forum?id=Skvgqgqxe
https://openreview.net/pdf?id=Skvgqgqxe
https://openreview.net/forum?id=Skvgqgqxe&noteId=B1OyMaWNg
BJ_0DiWNx
BymIbLKgl
ICLR.cc/2017/conference/-/paper97/official/review
{"title": "Limited theoretical novelty and evaluation", "rating": "5: Marginally below acceptance threshold", "review": "Authors show that a contrastive loss for a Siamese architecture can be used for learning representations for planar curves. With the proposed framework, authors are able to learn a representation which is comparable to traditional differential or integral invariants, as evaluated on few toy examples.\n\nThe paper is generally well written and shows an interesting application of the Siamese architecture. However, the experimental evaluation and the results show that these are rather preliminary results as not many of the choices are validated. My biggest concern is in the choice of the negative samples, as the network basically learns only to distinguish between shapes at different scales, instead of recognizing different shapes. It is well known fact that in order to achieve a good performance with the contrastive loss, one has to be careful about the hard negative sampling, as using too easy negatives may lead to inferior results. Thus, this may be the underlying reason for such choice of the negatives? Unfortunately, this is not discussed in the paper.\n\nFurthermore the paper misses a more thorough quantitative evaluation and concentrates more on showing particular examples, instead of measuring more robust statistics over multiple curves (invariance to noise and sampling artifacts).\n\nIn general, the paper shows interesting first steps in this direction, however it is not clear whether the experimental section is strong and thorough enough for the ICLR conference. Also the novelty of the proposed idea is limited as Siamese networks are used for many years and this work only shows that they can be applied to a different task.", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}
review
2017
ICLR.cc/2017/conference
Learning Invariant Representations Of Planar Curves
["Gautam Pai", "Aaron Wetzler", "Ron Kimmel"]
We propose a metric learning framework for the construction of invariant geometric functions of planar curves for the Euclidean and Similarity group of transformations. We leverage on the representational power of convolutional neural networks to compute these geometric quantities. In comparison with axiomatic constructions, we show that the invariants approximated by the learning architectures have better numerical qualities such as robustness to noise, resiliency to sampling, as well as the ability to adapt to occlusion and partiality. Finally, we develop a novel multi-scale representation in a similarity metric learning paradigm.
["Computer vision", "Deep learning", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=BymIbLKgl
https://openreview.net/pdf?id=BymIbLKgl
https://openreview.net/forum?id=BymIbLKgl&noteId=BJ_0DiWNx
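For reference, the contrastive (Siamese) loss the review above discusses has the standard Hadsell-style form sketched below in numpy. This is a generic sketch, with the curve-embedding network abstracted away; the margin value and embedding dimension are assumptions, not values from the paper.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, is_positive, margin=1.0):
    d = np.linalg.norm(emb_a - emb_b, axis=1)                   # pairwise embedding distances
    pos = is_positive * d ** 2                                   # pull matched pairs together
    neg = (1 - is_positive) * np.maximum(0.0, margin - d) ** 2   # push mismatched pairs apart
    return 0.5 * np.mean(pos + neg)

rng = np.random.default_rng(0)
a, b = rng.standard_normal((4, 16)), rng.standard_normal((4, 16))
y = np.array([1, 0, 1, 0])  # 1 = same curve under a Euclidean/similarity transform
print(contrastive_loss(a, b, y))
```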
HJehdh-4e
BymIbLKgl
ICLR.cc/2017/conference/-/paper97/official/review
{"title": "filling a much needed gap?", "rating": "6: Marginally above acceptance threshold", "review": "I'm torn on this one. Seeing the MPEG-7 dataset and references to curvature scale space brought to mind the old saying that \"if it's not worth doing, it's not worth doing well.\" There is no question that the MPEG-7 dataset/benchmark got saturated long ago, and it's quite surprising to see it in a submission to a modern ML conference. I brought up the question of \"why use this representation\" with the authors and they said their \"main purpose was to connect the theory of differential geometry of curves with the computational engine of a convolutional neural network.\" Fair enough. I agree these are seemingly different fields, and the authors deserve some credit for connecting them. If we give them the benefit of the doubt that this was worth doing, then the approach they pursue using a Siamese configuration makes sense, and their adaptation of deep convnet frameworks to 1D signals is reasonable. To the extent that the old invariant based methods made use of smoothed/filtered representations coupled with nonlinearities, it's sensible to revisit this problem using convnets. I wouldn't mind seeing this paper accepted, since it's different from the mainstream, but I worry about there being too narrow an audience at ICLR that still cares about this type of shape representation.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Learning Invariant Representations Of Planar Curves
["Gautam Pai", "Aaron Wetzler", "Ron Kimmel"]
We propose a metric learning framework for the construction of invariant geometric functions of planar curves for the Euclidean and Similarity group of transformations. We leverage on the representational power of convolutional neural networks to compute these geometric quantities. In comparison with axiomatic constructions, we show that the invariants approximated by the learning architectures have better numerical qualities such as robustness to noise, resiliency to sampling, as well as the ability to adapt to occlusion and partiality. Finally, we develop a novel multi-scale representation in a similarity metric learning paradigm.
["Computer vision", "Deep learning", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=BymIbLKgl
https://openreview.net/pdf?id=BymIbLKgl
https://openreview.net/forum?id=BymIbLKgl&noteId=HJehdh-4e
B10ljK-Nl
BymIbLKgl
ICLR.cc/2017/conference/-/paper97/official/review
{"title": "An interesting representation", "rating": "8: Top 50% of accepted papers, clear accept", "review": "Pros : \n- New representation with nice properties that are derived and compared with a mathematical baseline and background\n- A simple algorithm to obtain the representation\n\nCons :\n- The paper sounds like an applied maths paper, but further analysis on the nature of the representation could be done, for instance, by understanding the nature of each layer, or at least, the first.\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning Invariant Representations Of Planar Curves
["Gautam Pai", "Aaron Wetzler", "Ron Kimmel"]
We propose a metric learning framework for the construction of invariant geometric functions of planar curves for the Euclidean and Similarity group of transformations. We leverage on the representational power of convolutional neural networks to compute these geometric quantities. In comparison with axiomatic constructions, we show that the invariants approximated by the learning architectures have better numerical qualities such as robustness to noise, resiliency to sampling, as well as the ability to adapt to occlusion and partiality. Finally, we develop a novel multi-scale representation in a similarity metric learning paradigm.
["Computer vision", "Deep learning", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=BymIbLKgl
https://openreview.net/pdf?id=BymIbLKgl
https://openreview.net/forum?id=BymIbLKgl&noteId=B10ljK-Nl
Ske_zvGNl
rJ8Je4clg
ICLR.cc/2017/conference/-/paper202/official/review
{"title": "Intriguing idea, but lacking theoretical and empirical validation", "rating": "4: Ok but not good enough - rejection", "review": "In this paper, a Q-Learning variant is proposed that aims at \"propagating\" rewards faster by adding extra costs corresponding to bounds on the Q function, that are based on both past and future rewards. This leads to faster convergence, as shown on the Atari Learning Environment benchmark.\n\nThe paper is well written and easy to follow. The core idea of using relaxed inequality bounds in the optimization problem is original to the best of my knowledge, and results seem promising.\n\nThis submission however has a number of important shortcomings that prevent me from recommending it for publication at ICLR:\n\n1. The theoretical justification and analysis is very limited. As far as I can tell the bounds as defined require a deterministic reward to hold, which is rarely the case in practice. There is also the fact that the bounds are computed using the so-called \"target network\" with different parameters theta-, which is another source of discrepancy. And even before that, the bounds hold for Q* but are applied on Q for which they may not be valid until Q gets close enough to Q*. It also looks weird to take the max over k in (1, ..., K) when the definition of L_j,k makes it look like the max has to be L_j,1 (or even L_j,0, but I am not sure why that one is not considered), since L*_j,0 >= L*_j,1 >= ... >= L*_j,K. Neither of these issues are discussed in the paper, and there is no theoretical analysis of the convergence properties of the proposed method.\n\n[Update: some of these concerns were addressed in OpenReview comments]\n\n2. The empirical evaluation does not compensate, in my opinion, for the lack of theory. First, since there are two bounds introduced, I would have expected \"ablative\" experiments showing the improvement brought by each one independently. It is also unfortunate that the authors did not have time to let their algorithm run longer, since as shown in Fig. 1 there remain a significant amount of games where it performs worse compared to DQN. In addition, comparisons are limited to vanilla DQN and DDQN: I believe it would have been important to compare to other ways of incorporating longer-term rewards, like n-step Q-Learning or actor-critic. Finally, there is no experiment demonstrating that the proposed algorithm can indeed improve other existing DQN variants: I agree with the author when they say \"We believe that our method can be readily combined with other techniques developed for DQN\", however providing actual results showing this would have made the paper much stronger.\n\nIn conclusion, I do believe this line of research is worth pursuing, but also that additional work is required to really prove and understand its benefits.\n\nMinor comments:\n- Instead of citing the arxiv Wang et al (2015), it would be best to cite the 2016 ICML paper\n- The description of Q-Learning in section 3 says \"The estimated future reward is computed based on the current state s or a series of past states s_t if available.\" I am not sure what you mean by \"a series of past states\", since Q is defined as Q(s, a) and thus can only take the current state s as input, when defined this way.\n- The introduction of R_j in Alg. 1 is confusing since its use is only explained later in the text (in section 5 \"In addition, we also incorporate the discounted return R_j in the lower bound calculation to further stabilize the training\")\n- In Fig. 
S1 the legend should not say \"10M\" since the plot is from 1M to 10M", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening
["Frank S.He", "Yang Liu", "Alexander G. Schwing", "Jian Peng"]
We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.
["Reinforcement Learning", "Optimization", "Games"]
https://openreview.net/forum?id=rJ8Je4clg
https://openreview.net/pdf?id=rJ8Je4clg
https://openreview.net/forum?id=rJ8Je4clg&noteId=Ske_zvGNl
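A schematic numpy sketch of the bound construction the review above analyzes (an illustration of the general idea, not the authors' code): k-step returns looking forward give lower bounds on Q*(s_j, a_j), rearranged k-step bounds looking backward give upper bounds, and violations are added as quadratic penalties on top of the usual TD loss. The discount, penalty weight, and toy trajectory values are assumptions.

```python
import numpy as np

gamma, lam, K = 0.99, 4.0, 3

def bounds(rewards, q_max_future, q_past, j):
    """Bounds on Q*(s_j, a_j) from a stored trajectory.

    rewards[t]      : reward received at step t
    q_max_future[t] : max_a Q_target(s_t, a)
    q_past[t]       : Q_target(s_t, a_t)
    """
    lower = -np.inf
    for k in range(K):                       # look forward: k-step returns
        ret = sum(gamma ** i * rewards[j + i] for i in range(k + 1))
        lower = max(lower, ret + gamma ** (k + 1) * q_max_future[j + k + 1])
    upper = np.inf
    for k in range(1, K + 1):                # look backward: rearranged k-step bound
        ret = sum(gamma ** i * rewards[j - k + i] for i in range(k))
        upper = min(upper, (q_past[j - k] - ret) / gamma ** k)
    return lower, upper

def penalized_loss(q, td_target, lower, upper):
    td = (q - td_target) ** 2
    violation = np.maximum(0.0, lower - q) ** 2 + np.maximum(0.0, q - upper) ** 2
    return td + lam * violation              # quadratic penalty on bound violations

T, j = 12, 5
r = np.ones(T)                               # constant reward => Q* = 1/(1-gamma) = 100
lo, up = bounds(r, np.full(T, 100.0), np.full(T, 100.0), j)
print(lo, up, penalized_loss(99.0, 99.5, lo, up))
```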
BJhbTXKEx
rJ8Je4clg
ICLR.cc/2017/conference/-/paper202/official/review
{"title": "Review", "rating": "9: Top 15% of accepted papers, strong accept", "review": "In this paper, the authors proposed a extension to the DQN algorithm by introducing both an upper and lower bound to the optimal Q function. The authors show experimentally that this approach improves the data efficiency quite dramatically such that they can achieve or even supersede the performance of DQN that is trained in 8 days. \n\nThe idea is novel to the best of my knowledge and the improvement over DQN seems very significant. \n\nRecently, Remi et al have introduced the Retrace algorithm which can make use of multi-step returns to estimate Q values. As I suspect, some of the improvements that comes from the bounds is due to the fact that multi-step returns is used effectively. Therefore, I was wondering whether the authors have tried any approach like Retrace or Tree backup by Precup et al. and if so how do these methods stack up against the proposed method.\n\nThe author have very impressive results and the paper proposes a very promising direction for future research and as a result I would like to make a few suggestions:\n\nFirst, it would be great if the authors could include a discussion about deterministic vs stochastic MDPs. \n\nSecond, it would be great if the authors could include some kind of theoretically analysis about the approach.\n\nFinally, I would like to apologize for the late review.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening
["Frank S.He", "Yang Liu", "Alexander G. Schwing", "Jian Peng"]
We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.
["Reinforcement Learning", "Optimization", "Games"]
https://openreview.net/forum?id=rJ8Je4clg
https://openreview.net/pdf?id=rJ8Je4clg
https://openreview.net/forum?id=rJ8Je4clg&noteId=BJhbTXKEx
SJ8uwSGVx
rJ8Je4clg
ICLR.cc/2017/conference/-/paper202/official/review
{"title": "review", "rating": "9: Top 15% of accepted papers, strong accept", "review": "This paper proposes an improvement to the q-learning/DQN algorithm using constraint bounds on the q-function, which are implemented using quadratic penalties in practice. The proposed change is simple to implement and remarkably effective, enabling both significantly faster learning and better performance on the suite of Atari games.\n\nI have a few suggestions for improving the paper:\nThe paper could be improved by including qualitative observations of the learning process with and without the proposed penalties, to better understand the scenarios in which this method is most useful, and to develop a better understanding of its empirical performance.\n\nIt would also be nice to include zoomed-out versions of the learned curves in Figure 3, as the DQN has yet to converge. Error bars would also be helpful to judge stability over different random seeds.\n\nAs mentioned in the paper, this method could be combined with D-DQN. It would be interesting to see this combination, to see if the two are complementary. Do you plan to do this in the final version?\n\nAlso, a couple questions:\n- Do you think the performance of this method would continue to improve after 10M frames?\n- Could the ideas in this paper be extended to methods for continuous control like DDPG or NAF?", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening
["Frank S.He", "Yang Liu", "Alexander G. Schwing", "Jian Peng"]
We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.
["Reinforcement Learning", "Optimization", "Games"]
https://openreview.net/forum?id=rJ8Je4clg
https://openreview.net/pdf?id=rJ8Je4clg
https://openreview.net/forum?id=rJ8Je4clg&noteId=SJ8uwSGVx
S1nGIQ-Vl
By1snw5gl
ICLR.cc/2017/conference/-/paper435/official/review
{"title": "O(mn)?", "rating": "4: Ok but not good enough - rejection", "review": "L-SR1 seems to have O(mn) time complexity. I miss this information in your paper. \nYour experimental results suggest that L-SR1 does not outperform Adadelta (I suppose the same for Adam). \nGiven the time complexity of L-SR1, the x-axis showing time would suggest that L-SR1 is much (say, m times) slower. \n\"The memory size of 2 had the lowest minimum test loss over 90\" suggests that the main driven force of L-SR1 \nwas its momentum, i.e., the second-order information was rather useless.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
L-SR1: A Second Order Optimization Method for Deep Learning
["Vivek Ramamurthy", "Nigel Duffy"]
We describe L-SR1, a new second order method to train deep neural networks. Second order methods hold great promise for distributed training of deep networks. Unfortunately, they have not proven practical. Two significant barriers to their success are inappropriate handling of saddle points, and poor conditioning of the Hessian. L-SR1 is a practical second order method that addresses these concerns. We provide experimental results showing that L-SR1 performs at least as well as Nesterov's Accelerated Gradient Descent, on the MNIST and CIFAR10 datasets. For the CIFAR10 dataset, we see competitive performance on shallow networks like LeNet5, as well as on deeper networks like residual networks. Furthermore, we perform an experimental analysis of L-SR1 with respect to its hyper-parameters to gain greater intuition. Finally, we outline the potential usefulness of L-SR1 in distributed training of deep neural networks.
["second order optimization", "deep neural networks", "distributed training", "deep", "deep learning", "new second order", "second order methods", "great promise", "deep networks", "practical"]
https://openreview.net/forum?id=By1snw5gl
https://openreview.net/pdf?id=By1snw5gl
https://openreview.net/forum?id=By1snw5gl&noteId=S1nGIQ-Vl
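For context on the reviews of this paper, the symmetric rank-one (SR1) update at the core of L-SR1 has the standard form sketched below. This shows the full-matrix version for clarity; the limited-memory variant keeps only the m most recent (s, y) pairs, which is where the O(mn) per-iteration cost raised by the reviewer comes from. The skip tolerance r is a conventional choice, not a value from the paper.

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Symmetric rank-one update of the Hessian approximation B.

    s = x_{k+1} - x_k, y = grad_{k+1} - grad_k. Unlike BFGS, SR1 does not
    force B to stay positive definite, which is what lets it model the
    indefinite curvature found near saddle points.
    """
    v = y - B @ s
    denom = v @ s
    if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
        return B                             # standard SR1 skip rule: denominator too small
    return B + np.outer(v, v) / denom

B = np.eye(3)
s = np.array([0.1, -0.2, 0.05])
y = np.array([0.3, 0.1, -0.1])
print(sr1_update(B, s, y))
```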
rk3f2SyVg
By1snw5gl
ICLR.cc/2017/conference/-/paper435/official/review
{"title": "Address better optimization at saddle points with symmetric rank-one method which does not guarantee pos. def. update matrix, vs. BFGS approach. Investigating this optimization with limited memory version or SR1", "rating": "5: Marginally below acceptance threshold", "review": "It is an interesting idea to go after saddle points in the optimization with an SR1 update and a good start in experiments, but missing important comparisons to recent 2nd order optimizations such as Adam, other Hessian free methods (Martens 2012), Pearlmutter fast exact multiplication by the Hessian. From the mnist/cifar curves it is not really showing an advantage to AdaDelta/Nag (although this is stated), and much more experimentation is needed to make a claim about mini-batch insensitivity to performance, can you show error rates on a larger scale task?", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
L-SR1: A Second Order Optimization Method for Deep Learning
["Vivek Ramamurthy", "Nigel Duffy"]
We describe L-SR1, a new second order method to train deep neural networks. Second order methods hold great promise for distributed training of deep networks. Unfortunately, they have not proven practical. Two significant barriers to their success are inappropriate handling of saddle points, and poor conditioning of the Hessian. L-SR1 is a practical second order method that addresses these concerns. We provide experimental results showing that L-SR1 performs at least as well as Nesterov's Accelerated Gradient Descent, on the MNIST and CIFAR10 datasets. For the CIFAR10 dataset, we see competitive performance on shallow networks like LeNet5, as well as on deeper networks like residual networks. Furthermore, we perform an experimental analysis of L-SR1 with respect to its hyper-parameters to gain greater intuition. Finally, we outline the potential usefulness of L-SR1 in distributed training of deep neural networks.
["second order optimization", "deep neural networks", "distributed training", "deep", "deep learning", "new second order", "second order methods", "great promise", "deep networks", "practical"]
https://openreview.net/forum?id=By1snw5gl
https://openreview.net/pdf?id=By1snw5gl
https://openreview.net/forum?id=By1snw5gl&noteId=rk3f2SyVg
SyNjWlG4x
By1snw5gl
ICLR.cc/2017/conference/-/paper435/official/review
{"title": "Interesting work, but not ready to be published", "rating": "4: Ok but not good enough - rejection", "review": "The paper proposes a new second-order method L-SR1 to train deep neural networks. It is claimed that the method addresses two important optimization problems in this setting: poor conditioning of the Hessian and proliferation of saddle points. The method can be viewed as a concatenation of SR1 algorithm of Nocedal & Wright (2006) and limited-memory representations Byrd et al. (1994). First of all, I am missing a more formal, theoretical argument in this work (in general providing more intuition would be helpful too), which instead is provided in the works of Dauphin (2014) or Martens. The experimental section in not very convincing considering that the performance in terms of the wall-clock time is not reported and the advantage over some competitor methods is not very strong even in terms of epochs. I understand that the authors are optimizing their implementation still, but the question is: considering the experiments are not convincing, why would anybody bother to implement L-SR1 to train their deep models? The work is not ready to be published.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
L-SR1: A Second Order Optimization Method for Deep Learning
["Vivek Ramamurthy", "Nigel Duffy"]
We describe L-SR1, a new second order method to train deep neural networks. Second order methods hold great promise for distributed training of deep networks. Unfortunately, they have not proven practical. Two significant barriers to their success are inappropriate handling of saddle points, and poor conditioning of the Hessian. L-SR1 is a practical second order method that addresses these concerns. We provide experimental results showing that L-SR1 performs at least as well as Nesterov's Accelerated Gradient Descent, on the MNIST and CIFAR10 datasets. For the CIFAR10 dataset, we see competitive performance on shallow networks like LeNet5, as well as on deeper networks like residual networks. Furthermore, we perform an experimental analysis of L-SR1 with respect to its hyper-parameters to gain greater intuition. Finally, we outline the potential usefulness of L-SR1 in distributed training of deep neural networks.
["second order optimization", "deep neural networks", "distributed training", "deep", "deep learning", "new second order", "second order methods", "great promise", "deep networks", "practical"]
https://openreview.net/forum?id=By1snw5gl
https://openreview.net/pdf?id=By1snw5gl
https://openreview.net/forum?id=By1snw5gl&noteId=SyNjWlG4x
B17yL74He
S1Y0td9ee
ICLR.cc/2017/conference/-/paper461/official/review
{"title": "Poor performance on bioinformatics dataset?", "rating": "5: Marginally below acceptance threshold", "review": "the paper proposed a method mainly for graph classification. The proposal is to decompose graphs objects into hierarchies of small graphs followed by generating vector embeddings and aggregation using deep networks. \nThe approach is reasonable and intuitive however, experiments do not show superiority of their approach. \n\nThe proposed method outperforms Yanardag et al. 2015 and Niepert et al., 2016 on social networks graphs but are quite inferior to Niepert et al., 2016 on bio-informatics datasets. the authors did not report acccuracy for Yanardag et al. 2015 which on similar bio-ddatasets for example NCI1 is 80%, significantly better than achieved by the proposed method. The authors claim that their method is tailored for social networks graph more is not supported by good arguments? what models of graphs is this method more suitable? ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Shift Aggregate Extract Networks
["Francesco Orsini", "Daniele Baracchi", "Paolo Frasconi"]
The Shift Aggregate Extract Network SAEN is an architecture for learning representations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying 'shift', 'aggregate' and 'extract' operations on the vector representations of its parts. We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce the memory usage and obtain significant speedups. Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art.
["Supervised Learning"]
https://openreview.net/forum?id=S1Y0td9ee
https://openreview.net/pdf?id=S1Y0td9ee
https://openreview.net/forum?id=S1Y0td9ee&noteId=B17yL74He
r1xXahBNl
S1Y0td9ee
ICLR.cc/2017/conference/-/paper461/official/review
{"title": "Interesting approach, confusing presentation.", "rating": "5: Marginally below acceptance threshold", "review": "The paper contributes to recent work investigating how neural networks can be used on graph-structured data. As far as I can tell, the proposed approach is the following:\n\n 1. Construct a hierarchical set of \"objects\" within the graph. Each object consists of multiple \"parts\" from the set of objects in the level below. There are potentially different ways a part can be part of an object (the different \\pi labels), which I would maybe call \"membership types\". In the experiments, the objects at the bottom level are vertices. At the next level they are radius 0 (just a vertex?) and radius 1 neighborhoods around each vertex, and the membership types here are either \"root\", or \"element\" (depending on whether a vertex is the center of the neighborhood or a neighbor). At the top level there is one object consisting of all of these neighborhoods, with membership types of \"radius 0 neighborhood\" (isn't this still just a vertex?) or \"radius 1 neighborhood\".\n\n 2. Every object has a representation. Each vertex's representation is a one-hot encoding of its degree. To construct an object's representation at the next level, the following scheme is employed:\n\n a. For each object, sum the representation of all of its parts having the same membership type.\n b. Concatenate the sums obtained from different membership types.\n c. Pass this vector through a multi-layer neural net.\n\nI've provided this summary mainly because the description in the paper itself is somewhat hard to follow, and relevant details are scattered throughout the text, so I'd like to verify that my understanding is correct.\n\nSome additional questions I have that weren't clear from the text: how many layers and hidden units were used? What are the dimensionalities of the representations used at each layer? How is final classification performed? What is the motivation for the chosen \"ego-graph\" representation? \n\nThe proposed approach is interesting and novel, the compression technique appears effective, and the results seem compelling. However, the clarity and structure of the writing is quite poor. It took me a while to figure out what was going on---the initial description is provided without any illustrative examples, and it required jumping around the paper to figure for example how the \\pi labels are actually used. Important details around network architecture aren't provided, and very little in the way of motivation is given for many of the choices made. Were other choices of decomposition/object-part structures investigated, given the generality of the shift-aggregate-extract formulation? What motivated the choice of \"ego-graphs\"? Why one-hot degrees for the initial attributes?\n\nOverall, I think the paper contains a useful contribution on a technical level, but the presentation needs to be significantly cleaned up before I can recommend acceptance.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Shift Aggregate Extract Networks
["Francesco Orsini", "Daniele Baracchi", "Paolo Frasconi"]
The Shift Aggregate Extract Network SAEN is an architecture for learning representations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying 'shift', 'aggregate' and 'extract' operations on the vector representations of its parts. We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce the memory usage and obtain significant speedups. Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art.
["Supervised Learning"]
https://openreview.net/forum?id=S1Y0td9ee
https://openreview.net/pdf?id=S1Y0td9ee
https://openreview.net/forum?id=S1Y0td9ee&noteId=r1xXahBNl
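The review above summarizes the shift-aggregate-extract step as: sum part representations per membership type, concatenate the per-type sums, and pass the result through a neural net. A minimal numpy sketch of that reading follows; the shapes, the two membership types, and the single ReLU layer are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_types = 4, 6, 2
W = rng.standard_normal((d_out, n_types * d_in)) * 0.1   # "extract" weights

def shift_aggregate_extract(parts):
    """parts: list of (membership_type, vector) pairs for one object."""
    sums = np.zeros((n_types, d_in))
    for t, h in parts:
        sums[t] += h                         # shift: sum parts within each membership type
    z = sums.ravel()                         # aggregate: concatenate the per-type sums
    return np.maximum(W @ z, 0.0)            # extract: pass through a ReLU layer

parts = [(0, rng.standard_normal(d_in)),     # type 0, e.g. the "root" vertex
         (1, rng.standard_normal(d_in)),     # type 1, e.g. an "element" vertex
         (1, rng.standard_normal(d_in))]
print(shift_aggregate_extract(parts))
```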
SJP14kfEx
S1Y0td9ee
ICLR.cc/2017/conference/-/paper461/official/review
{"title": "Might be something good here, but key details are missing.", "rating": "3: Clear rejection", "review": "Some of the key details in this paper are very poorly explained or not even explained at all. The model sounds interesting and there may be something good here, but it should not be published in it's current form. \n\nSpecific comments:\n\nThe description of the R_l,pi convolutions in Section 2.1 was unclear. Specifically, I wasn't confident that I understood what the labels pi represented.\n\nThe description of the SAEN structure in section 2.2 was worded poorly. My understanding, based on Equation 1, is that the 'shift' operation is simply a summation of the representations of the member objects, and that the 'aggregate' operation simply concatenates the representations from multiple relations. In the 'shift' step, it seems more appropriate to average over the object's member's representations h_j, rather than sum over them.\n\nThe compression technique presented in Section 2.3 requires that multiple objects at a level have the same representation. Why would this ever occur, given that the representations are real valued and high-dimensional? The text is unintelligible: \"two objects are equivalent if they are made by same sets of parts for all the pi-parameterizations of the R_l,pi decomposition relation.\" \n\nThe 'ego graph patterns' in Figure 1 and 'Ego Graph Neural Network' used in the experiments are never explained in the text, and no references are given. Because of this, I cannot comment on the quality of the experiments.", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}
review
2017
ICLR.cc/2017/conference
Shift Aggregate Extract Networks
["Francesco Orsini", "Daniele Baracchi", "Paolo Frasconi"]
The Shift Aggregate Extract Network SAEN is an architecture for learning representations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying 'shift', 'aggregate' and 'extract' operations on the vector representations of its parts. We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce the memory usage and obtain significant speedups. Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art.
["Supervised Learning"]
https://openreview.net/forum?id=S1Y0td9ee
https://openreview.net/pdf?id=S1Y0td9ee
https://openreview.net/forum?id=S1Y0td9ee&noteId=SJP14kfEx
B1-0khZEl
Sy2fzU9gl
ICLR.cc/2017/conference/-/paper291/official/review
{"title": "Very interesting results, but more details and more quantitative results are needed", "rating": "6: Marginally above acceptance threshold", "review": "\nThis paper proposes the beta-VAE, which is a reasonable but also straightforward generalization of the standard VAE. In particular, a weighting factor beta is added for the KL-divergence term to balance the likelihood and KL-divergence. Experimental results show that tuning this weighting factor is important for learning disentangled representations. A linear-classifier based protocol is proposed for measuring the quality of disentanglement. Impressive illustrations on manipulating latent variables are shown in the paper. \n\nLearning disentangled representations without supervision is an important topic. Showing the effectiveness of VAE for this task is interesting. Generalizing VAE with a weighting factor is straightforward (though reformulating VAE is also interesting), the main contribution of this paper is on the empirical side. \n\nThe proposed protocol for measuring disentangling quality is reasonable. Establishing protocol is one important methodology contribution of this paper, but the presentation of Section 3 is still not good. Little motivation is provided at the beginning of Section 3. Figure 2 is a summary of the algorithm, which is helpful, but it still necessary to intuitively explain the motivation at the first place (e.g., what you expect if a factor is disentangled, and why the performance of a classifier can reflect such an expectation). Moreover, 1) z_diff appeared without any definition in the main text. 2) Use \u201cdecoding\u201d for x~Sim(v,w) may make people confuse the ground truth sampling procedure w ith the trained decoder. \n\nThe illustrative figures on traversing the disentangled factor are impressive, though image generation quality is not as good as InfoGAN (not the main point of this paper). However, 1) it will be helpful to discuss if the good disentangling quality only attribute to the beta factor and VAE framework. For example, the training data in this paper seems to be densely sampled for the visualized factors. Does the sampling density play a critical role? 2) Not too many qualitative results are provided for each experiment? Adding more figures (e.g., in appendix) to cover more factors and seeding images can strength the conclusions drawn in this paper. 3) Another detailed question related to the generalizability of the model: are the seeding image for visualizing faces from unseen subjects or subjects in the training set? (maybe I missed something here.)\n\nQuantitative results are only presented for the synthesized 2D shape. What hinders this paper from reporting quantitative numbers on real data (e.g., the 2D and 3D face data)? One possible reason is that not all factors can be disentangled for real data, but it is still feasible to pick up some well-defined factor to measure the quantitative performance. \n\nQuantitative performance is only measured by the proposed protocol. Since the effectiveness of the protocol is something the paper need to justify, reporting quantitative results using simpler protocol is helpful both for demonstrating the disentangling quality and for justifying the proposed protocol (consistency with other measurement). A simple experiment is facial identity recognition and pose estimation using disentangled features on a standard test set (like in Reed et al, ICML 2014). \n\nIn Figure 6 (left), why ICA is worse than PCA for disentanglement? 
Is it due to the limitation of the ICA algorithm or some other reasons? \n\nIn Figure 6 (right), what is \u201cfactor change accuracy\u201d? According to Appendix A.4 (which is not referred to in the main text), it is the \u201cDisentanglement metric score\u201d. Is that right?\nIf so Figure 6 (right) shows the reconstruction results for the best disentanglement metric score. Then, 1) how about random generation or traversing along a disentangled factor? 2) more importantly, how is the reconstruction/generation results when the disentanglement metric score is suboptimal. \n\nOverall, the results presented in this paper are very interesting, but there are many details to be clarified. Moreover, more quantitative results are also needed. I hope at least some of the above concerns can be addressed. \n\n\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
["Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner"]
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyper parameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.
["constrained variational framework", "framework", "beta", "infogan", "data", "interpretable factorised representation", "world", "supervision"]
https://openreview.net/forum?id=Sy2fzU9gl
https://openreview.net/pdf?id=Sy2fzU9gl
https://openreview.net/forum?id=Sy2fzU9gl&noteId=B1-0khZEl
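For reference, the beta-VAE objective the reviews describe is the standard VAE ELBO with the KL term scaled by a factor beta > 1 (beta = 1 recovers the VAE). Below is a minimal numpy sketch assuming a diagonal Gaussian encoder and a unit Gaussian prior, with a plain squared error standing in for the decoder likelihood; beta = 4 and the shapes are illustrative values, not the paper's settings.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    recon = np.sum((x - x_recon) ** 2, axis=1)                   # -log p(x|z) up to constants
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=1)
    return np.mean(recon + beta * kl)                            # beta = 1 recovers the VAE

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 10))
mu, logvar = rng.standard_normal((8, 3)) * 0.1, np.full((8, 3), -0.1)
print(beta_vae_loss(x, x + 0.1, mu, logvar))
```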
H16z7IT4l
Sy2fzU9gl
ICLR.cc/2017/conference/-/paper291/official/review
{"title": "", "rating": "5: Marginally below acceptance threshold", "review": "The paper proposes beta-VAE which strengthen the KL divergence between the recognition model and the prior to limit the capacity of latent variables while sacrificing the reconstruction error. This allows the VAE model to learn more disentangled representation. \n\nThe main concern is that the paper didn't present any quantitative result on log likelihood estimation. On the quality of generated samples, although the beta-VAE learns disentangled representation, the generated samples are not as realistic as those based on generative adversarial network, e.g., InfoGAN. Beta-VAE learns some interpretable factors of variation, but it still remains unclear why it is a better (or more efficient) representation than that of standard VAE.\n\nIn experiment, what is the criteria for cross-validation on hyperparameter \\beta?\n\nThere also exists other ways to limit the capacity of the model. The simplest way is to reduce the latent variable dimension. I am wondering how the proposed beta-VAE is a better model than the VAE with reduced, or optimal latent variable dimension.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
["Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner"]
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyper parameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.
["constrained variational framework", "framework", "beta", "infogan", "data", "interpretable factorised representation", "world", "supervision"]
https://openreview.net/forum?id=Sy2fzU9gl
https://openreview.net/pdf?id=Sy2fzU9gl
https://openreview.net/forum?id=Sy2fzU9gl&noteId=H16z7IT4l
HyRZoSLVe
Sy2fzU9gl
ICLR.cc/2017/conference/-/paper291/official/review
{"title": "Simple and effective", "rating": "7: Good paper, accept", "review": "Summary\n===\n\nThis paper presents Beta-VAE, an augmented Variational Auto-Encoder which\nlearns disentangled representations. The VAE objective is derived\nas an approximate relaxation of a constrained optimization problem where\nthe constraint matches the latent code of the encoder to a prior.\nWhen KKT multiplier beta on this constraint is set to 1 the result is the\noriginal VAE objective, but when beta > 1 we obtain Beta-VAE, which simply\nincreases the penalty on the KL divergence term. This encourages the model to\nlearn a more efficient representation because the capacity of the latent\nrepresentation is more limited by beta. The distribution of the latent\nrepresentation is rewarded more when factors are independent because\nthe prior (an isotropic Gaussian) encourages independent factors, so the\nrepresentation should also be disentangled.\n\nA new metric is proposed to evaluate the degree of disentanglement. Given\na setting in which some disentangled latent factors are known, many examples\nare generated which differ in all of these factors except one. These examples\nare encoded into the learned latent representation and a simple classifier\nis used to predict which latent factor was kept constant. If the learned\nrepresentation does not disentangle the constant factor then the classifier\nwill more easily confuse factors and its accuracy will be lower. This\naccuracy is the final number reported.\n\nA synthetic dataset of 2D shapes with known latent factors is created to\ntest the proposed metric and Beta-VAE outperforms a number of baselines\n(notably InfoGAN and the semi-supervised DC-IGN).\n\nQualitative results show that Beta-VAE learns disentangled factors\non the 3D chairs dataset, a dataset of 3D faces, and the celebA dataset\nof face images. The effect of varying Beta is also evaluated using the proposed\nmetric and the latent factors learned on the 2D shapes dataset are explored\nin detail.\n\n\nStrengths\n===\n* Beta-VAE is simple and effective.\n\n* The proposed metric is a novel way of testing whether ground truth factors\nof variation have been identified.\n\n* There is extensive comparison to relevant baselines.\n\n\nWeaknesses\n===\n\n* Section 3 describes the proposed disentanglement metric, however I feel\nI need to read the caption of the associated figure (I thank for adding\nthat) and Appendix 4 to understand the metric intuitively or in detail.\nIt would be easier to read this section if a clear intuition preceeded\na detailed description and I think more space should be devoted to this\nin the paper.\n\n* Appendix 4: Why was the bottom 50% of the resulting scores discarded?\n\n* As indicated in pre-review comments, the disentanglement metric is similar\nto a measure of correlation between latent features. Could the proposed metric\nbe compared to a direct measure of cross-correlation between latent factors\nestimated over the 2D shapes dataset?\n\n\n* The end of section 4.2 observes that high beta values result in low\ndisentanglement, which suggests the most efficient representation is not\ndisentangled. This seems to disagree with the intuition from the approach\nsection that more efficient representations should be disentangled. 
It would\nbe nice to see discussion of potential reasons for this disagreement.\n\n* The writing is somewhat dense.\n\n\nOverall Evaluation\n===\nThe core idea is novel, simple and extensive tests show that it is effective.\nThe proposed evaluation metric is novel might come into broader use.\nThe main downside to the current version of this paper is the presentation,\nwhich provides sufficient detail but could be more clear.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
["Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner"]
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyper parameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.
["constrained variational framework", "framework", "beta", "infogan", "data", "interpretable factorised representation", "world", "supervision"]
https://openreview.net/forum?id=Sy2fzU9gl
https://openreview.net/pdf?id=Sy2fzU9gl
https://openreview.net/forum?id=Sy2fzU9gl&noteId=HyRZoSLVe
Hyq3zhbVg
SJg498clg
ICLR.cc/2017/conference/-/paper310/official/review
{"title": "Review", "rating": "3: Clear rejection", "review": "The paper proposes a model that aims at learning to label nodes of graph in a semi-supervised setting. The idea of the model is based on the use of the graph structure to regularize the representations learned at the node levels. Experimental results are provided on different tasks\n\nThe underlying idea of this paper (graph regularization) has been already explored in different papers \u2013 e.g 'Learning latent representations of nodes for classifying in heterogeneous social networks' [Jacob et al. 2014], [Weston et al 2012] where a real graph structure is used instead of a built one. The experiments lack of strong comparisons with other graph models (e.g Iterative Classification, 'Learning from labeled and unlabeled data on a directed graph', ...). So the novelty of the paper and the experimental protocol are not strong enough to accpet the paper.\n\nPros:\n* Learning over graph is an important topic\n\nCons:\n* Many existing approaches have already exploited the same types of ideas, resulting in very close models\n* Lack of comparison w.r.t existing models\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neural Graph Machines: Learning Neural Networks Using Graphs
["Thang D. Bui", "Sujith Ravi", "Vivek Ramavajjala"]
Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural network architectures, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training objective for neural networks, Neural Graph Machines, for combining the power of neural networks and label propagation. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs. The proposed method is experimentally validated on a wide range of tasks (multi-label classification on social graphs, news categorization and semantic intent classification) using different architectures (NNs, CNNs, and LSTM RNNs).
["Semi-Supervised Learning", "Natural language processing", "Applications"]
https://openreview.net/forum?id=SJg498clg
https://openreview.net/pdf?id=SJg498clg
https://openreview.net/forum?id=SJg498clg&noteId=Hyq3zhbVg
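Both the review above and the paper's abstract describe a graph-regularized training objective in the style of Weston et al. (2012). A minimal sketch of that family of objectives, using illustrative notation (the loss l, distance d, hidden representation h_theta, edge weights w_uv, and trade-off alpha are placeholders, not the paper's exact eq. (4)):

```latex
\mathcal{L}(\theta)
  = \sum_{i \in \text{labeled}} \ell\big(f_\theta(x_i),\, y_i\big)
  + \alpha \sum_{(u,v) \in \mathcal{E}} w_{uv}\, d\big(h_\theta(x_u),\, h_\theta(x_v)\big)
```

The first term is ordinary supervised training; the second biases neighboring nodes toward similar hidden representations, which is the label-propagation-like component the reviews compare against eq. (9) of Weston et al. (2008).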
SkitQvmNl
SJg498clg
ICLR.cc/2017/conference/-/paper310/official/review
{"title": "Very similar to previous work, rebranded.", "rating": "3: Clear rejection", "review": "The authors introduce a semi-supervised method for neural networks, inspired from label propagation.\n\nThe method appears to be exactly the same than the one proposed in (Weston et al, 2008) (the authors cite the 2012 paper). The optimized objective function in eq (4) is exactly the same than eq (9) in (Weston et al, 2008).\n\nAs possible novelty, the authors propose to use the adjacency matrix as input to the neural network, when there are no other features, and show success on the BlogCatalog dataset.\n\nExperiments on text classification use neighbors according to word2vec average embedding to build the adjacency matrix. Top reported accuracies are not convincing compared to (Zhang et al, 2015) reported performance. Last experiment is on semantic intent classification, which a custom dataset; neighbors are also found according to a word2vec metric.\n\nIn summary, the paper propose few applications to the original (Weston et al, 2008) paper. It rebrands the algorithm under a new name, and does not bring any scientific novelty, and the experimental section lacks existing baselines to be convincing.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neural Graph Machines: Learning Neural Networks Using Graphs
["Thang D. Bui", "Sujith Ravi", "Vivek Ramavajjala"]
Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural network architectures, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training objective for neural networks, Neural Graph Machines, for combining the power of neural networks and label propagation. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs. The proposed method is experimentally validated on a wide range of tasks (multi-label classification on social graphs, news categorization and semantic intent classification) using different architectures (NNs, CNNs, and LSTM RNNs).
["Semi-Supervised Learning", "Natural language processing", "Applications"]
https://openreview.net/forum?id=SJg498clg
https://openreview.net/pdf?id=SJg498clg
https://openreview.net/forum?id=SJg498clg&noteId=SkitQvmNl
BJofT1mNg
SJg498clg
ICLR.cc/2017/conference/-/paper310/official/review
{"title": "Very similar to previous work.", "rating": "4: Ok but not good enough - rejection", "review": "This paper proposes the Neural Graph Machine that adds in graph regularization on neural network hidden representations to improve network learning and take the graph structure into account. The proposed model, however, is almost identical to that of Weston et al. 2012.\n\nAs the authors have clarified in the answers to the questions, there are a few new things that previous work did not do:\n\n1. they showed that graph augmented training for a range of different types of networks, including FF, CNN, RNNs etc. and works on a range of problems.\n2. graphs help to train better networks, e.g. 3 layer CNN with graphs does as well as than 9 layer CNNs\n3. graph augmented training works on a variety of different kinds of graphs.\n\nHowever, all these points mentioned above seems to simply be different applications of the graph augmented training idea, and observations made during the applications. I think it is therefore not proper to call the proposed model a novel model with a new name Neural Graph Machine, but rather making it clear in the paper that this is an empirical study of the model proposed by Weston et al. 2012 to different problems would be more acceptable.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neural Graph Machines: Learning Neural Networks Using Graphs
["Thang D. Bui", "Sujith Ravi", "Vivek Ramavajjala"]
Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural network architectures, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training objective for neural networks, Neural Graph Machines, for combining the power of neural networks and label propagation. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs. The proposed method is experimentally validated on a wide range of tasks (multi-label classification on social graphs, news categorization and semantic intent classification) using different architectures (NNs, CNNs, and LSTM RNNs).
["Semi-Supervised Learning", "Natural language processing", "Applications"]
https://openreview.net/forum?id=SJg498clg
https://openreview.net/pdf?id=SJg498clg
https://openreview.net/forum?id=SJg498clg&noteId=BJofT1mNg
HJsxV1GVx
B16dGcqlx
ICLR.cc/2017/conference/-/paper531/official/review
{"title": "Interesting idea for imitation learning. Paper could have been more general. ", "rating": "6: Marginally above acceptance threshold", "review": "The paper presents an interesting new problem setup for imitation learning: an agent tries to imitate a trajectory demonstrated by an expert but said trajectory is demonstrated in a different state or observation space than the one accessible by the agent (although the dynamics of the underlying MDP are shared). The paper proposes a solution strategy that combines recent work on domain confusion losses with a recent IRL method based on generative adversarial networks.\n\nI believe the general problem to be relevant and agree with the authors that it results in a more natural formulation for imitation learning that might be more widely applicable.\nThere are however a few issues with the paper in its current state that make the paper fall short of being a great exploration of a novel idea. I will list these concerns in the following (in arbitrary order)\n- The paper feels at times to be a bit hurriedly written (this also mainly manifests itself in the experiments, see comment below) and makes a few fairly strong claims in the introduction that in my opinion are not backed up by their approach. For example: \"Advancements in this class of algorithms would significantly improve the state of robotics, because it will enable anyone to easily teach robots new skills\"; given that the current method to my understanding has the same issues that come with standard GAN training (e.g. instability etc.) and requires a very accurate simulator to work well (since TRPO will require a large number of simulated trajectories in each step) this seems like an overstatement.\n There are some sentences that are ungrammatical or switch tense in the middle of the sentence making the paper harder to read than necessary, e.g. Page 2: \"we find that this simple approach has been able to solve the problems\"\n- The general idea of third person imitation learning is nice, clear and (at least to my understanding) also novel. However, instead of exploring how to generally adapt current IRL algorithms to this setting the authors pick a specific approach that they find promising (using GANs for IRL) and extend it. A significant amount of time is then spent on explaining why current IRL algorithms will fail in the third-person setting. I fail to see why the situation for the GAN based approach is any different than that of any other existing IRL algorithm. To be more clear: I see no reason why e.g. behavioral cloning could not be extended with a domain confusion loss in exactly the same way as the approach presented. To this end it would have been nice to rather discuss which algorithms can be adapted in the same way (and also test them) and which ones cannot. One straightforward approach to apply any IRL algorithm would for example be to train two autoencoders for both domains that share higher layers + a domain confusion loss on the highest layer, should that not result in features that are directly usable? If not, why?\n- While the general argument that existing IRL algorithms will fail in the proposed setting seems reasonable it is still unfortunate that no attempts have been made to validate this empirically. No comparison is made regarding what happens when one e.g. performs supervised learning (behavioral cloning) using the expert observations and then transfers to the changed domain. How well would this work in practice ? 
Also, how fast can different IRL algorithms solve the target task in general (assuming a first person perspective) ?\n- Although I like the idea of presenting the experiments as being directed towards answering a specific set of questions I feel like the posed questions somewhat distract from the main theme of the paper. Question 2 suddenly makes the use of additional velocity information to be a main point of importance and the experiments regarding Question 3 in the end only contain evaluations regarding two hyperparameters (ignoring all other parameters such as the parameters for TRPO, the number of rollouts per iteration, the number of presented expert episodes and the design choices for the GAN). I understand that not all of these can be evaluated thoroughly in a conference paper but I feel like some more experiments or at least some discussion would have helped here.\n- The presented experimental evaluation somewhat hides the cost of TRPO training with the obtained reward function. How many roll-outs are necessary in each step?\n- The experiments lack some details: How are the expert trajectories obtained? The domains for the pendulum experiment seem identical except for coloring of the pole, is that correct (I am surprised this small change seems to have such a detrimental effect)? Figure 3 shows average performance over 5 trials, what about Figure 5 (if this is also average performance, what is the variance here)? Given that GANs are not easy to train, how often does the training fail/were you able to re-use the hyperparameters across all experiments?\n\nUPDATE:\nI updated the score. Please see my response to the rebuttal below.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Third Person Imitation Learning
["Bradly C Stadie", "Pieter Abbeel", "Ilya Sutskever"]
Reinforcement learning (RL) makes it possible to train agents capable of achieving sophisticated goals in complex and uncertain environments. A key difficulty in reinforcement learning is specifying a reward function for the agent to optimize. Traditionally, imitation learning in RL has been used to overcome this problem. Unfortunately, hitherto imitation learning methods tend to require that demonstrations are supplied in the first-person: the agent is provided with a sequence of states and a specification of the actions that it should have taken. While powerful, this kind of imitation learning is limited by the relatively hard problem of collecting first-person demonstrations. Humans address this problem by learning from third-person demonstrations: they observe other humans perform tasks, infer the task, and accomplish the same task themselves. In this paper, we present a method for unsupervised third-person imitation learning. Here third-person refers to training an agent to correctly achieve a simple goal in a simple environment when it is provided a demonstration of a teacher achieving the same goal but from a different viewpoint; and unsupervised refers to the fact that the agent receives only these third-person demonstrations, and is not provided a correspondence between teacher states and student states. Our method's primary insight is that recent advances from domain confusion can be utilized to yield domain agnostic features which are crucial during the training process. To validate our approach, we report successful experiments on learning from third-person demonstrations in a pointmass domain, a reacher domain, and an inverted pendulum domain.
["demonstrations", "agent", "imitation learning", "third person imitation", "reinforcement learning", "problem", "task", "possible", "agents capable"]
https://openreview.net/forum?id=B16dGcqlx
https://openreview.net/pdf?id=B16dGcqlx
https://openreview.net/forum?id=B16dGcqlx&noteId=HJsxV1GVx
SJezwxzEg
B16dGcqlx
ICLR.cc/2017/conference/-/paper531/official/review
{"title": "Interesting idea but need more experiments", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposed a novel adversarial framework to train a model from demonstrations in a third-person perspective, to perform the task in the first-person view. Here the adversarial training is used to extract a novice-expert (or third-person/first-person) independent feature so that the agent can use to perform the same policy in a different view point.\n\nWhile the idea is quite elegant and novel (I enjoy reading it), more experiments are needed to justify the approach. Probably the most important issue is that there is no baseline, e.g., what if we train the model with the image from the same viewpoint? It should be better than the proposed approach but how close are they? How the performance changes when we gradually change the viewpoint from third-person to first-person? Another important question is that maybe the network just blindly remembers the policy, in this case, the extracted feature could be artifacts of the input image that implicitly counts the time tick in some way (and thus domain-agonistic), but can still perform reasonable policy. Since the experiments are conduct in a synthetic environment, this might happen. An easy check is to run the algorithm on multiple viewpoint and/or with blurred/differently rendered images, and/or with random initial conditions.\n\nOther ablation analysis is also needed. For example, I am not fully convinced by the gradient flipping trick used in Eqn. 5, and in the experiments there is no ablation analysis for that (GAN/EM style training versus gradient flipping trick). For the experiments, Fig. 4,5,6 does not have error bars and is not very convincing.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Third Person Imitation Learning
["Bradly C Stadie", "Pieter Abbeel", "Ilya Sutskever"]
Reinforcement learning (RL) makes it possible to train agents capable of achieving sophisticated goals in complex and uncertain environments. A key difficulty in reinforcement learning is specifying a reward function for the agent to optimize. Traditionally, imitation learning in RL has been used to overcome this problem. Unfortunately, hitherto imitation learning methods tend to require that demonstrations are supplied in the first-person: the agent is provided with a sequence of states and a specification of the actions that it should have taken. While powerful, this kind of imitation learning is limited by the relatively hard problem of collecting first-person demonstrations. Humans address this problem by learning from third-person demonstrations: they observe other humans perform tasks, infer the task, and accomplish the same task themselves. In this paper, we present a method for unsupervised third-person imitation learning. Here third-person refers to training an agent to correctly achieve a simple goal in a simple environment when it is provided a demonstration of a teacher achieving the same goal but from a different viewpoint; and unsupervised refers to the fact that the agent receives only these third-person demonstrations, and is not provided a correspondence between teacher states and student states. Our method's primary insight is that recent advances from domain confusion can be utilized to yield domain agnostic features which are crucial during the training process. To validate our approach, we report successful experiments on learning from third-person demonstrations in a pointmass domain, a reacher domain, and an inverted pendulum domain.
["demonstrations", "agent", "imitation learning", "third person imitation", "reinforcement learning", "problem", "task", "possible", "agents capable"]
https://openreview.net/forum?id=B16dGcqlx
https://openreview.net/pdf?id=B16dGcqlx
https://openreview.net/forum?id=B16dGcqlx&noteId=SJezwxzEg
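The review above asks for an ablation of the gradient flipping trick in Eqn. 5 against GAN/EM-style alternating updates. For context, here is a minimal sketch of a standard gradient reversal layer in the style of Ganin & Lempitsky (2015), which is the usual way such domain-confusion objectives are implemented; the PyTorch framing, module names, and lambda value are illustrative assumptions, not the paper's code:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scales gradients by -lam on the
    backward pass, so minimizing the domain loss end-to-end trains the
    domain classifier normally while training the upstream feature
    extractor to confuse it."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing into the features;
        # lam itself receives no gradient.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage sketch: features -> gradient reversal -> domain classifier.
features = torch.randn(8, 64, requires_grad=True)  # stand-in features
domain_head = torch.nn.Linear(64, 2)                # hypothetical 2-domain classifier
domain_logits = domain_head(grad_reverse(features, lam=0.5))
```

The ablation the review asks for would compare this single-objective formulation against explicitly alternating min/max updates as in GAN training.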
B1uj8o-Ee
B16dGcqlx
ICLR.cc/2017/conference/-/paper531/official/review
{"title": "Review", "rating": "6: Marginally above acceptance threshold", "review": "The paper extends the imitation learning paradigm to the case where the demonstrator and learner have different points of view. This is an important contribution, with several good applications. The main insight is to use adversarial training to learn a policy that is robust to this difference in perspective. This problem formulation is quite novel compared to the standard imitation learning literature (usually first-order perspective), though has close links to the literature on transfer learning (as explained in Sec.2).\n\nThe basic approach is clearly explained, and follows quite readily from recent literature on imitation learning and adversarial training.\n\nI would have expected to see comparison to the following methods added to Figure 3:\n1) Standard 1st person imitation learning using agent A data, and apply the policy on agent A. This is an upper-bound on how well you can expect to do, since you have the correct perspective.\n2) Standard 1st person imitation learning using agent A data, then apply the policy on agent B. Here, I expect it might do less well than 3rd person learning, but worth checking to be sure, and showing what is the gap in performance.\n3) Reinforcement learning using agent A data, and apply the policy on agent A. I expect this might do better than 3rd person imitation learning but it might depend on the scenario (e.g. difficulty of imitation vs exploration; how different are the points of view between the agents). I understand this is how the expert data is collected for the demonstrator, but I don\u2019t see the performance results from just using this procedure on the learner (to compare to Fig.3 results).\n\nIncluding these results would in my view significantly enhance the impact of the paper.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Third Person Imitation Learning
["Bradly C Stadie", "Pieter Abbeel", "Ilya Sutskever"]
Reinforcement learning (RL) makes it possible to train agents capable of achieving sophisticated goals in complex and uncertain environments. A key difficulty in reinforcement learning is specifying a reward function for the agent to optimize. Traditionally, imitation learning in RL has been used to overcome this problem. Unfortunately, hitherto imitation learning methods tend to require that demonstrations are supplied in the first-person: the agent is provided with a sequence of states and a specification of the actions that it should have taken. While powerful, this kind of imitation learning is limited by the relatively hard problem of collecting first-person demonstrations. Humans address this problem by learning from third-person demonstrations: they observe other humans perform tasks, infer the task, and accomplish the same task themselves. In this paper, we present a method for unsupervised third-person imitation learning. Here third-person refers to training an agent to correctly achieve a simple goal in a simple environment when it is provided a demonstration of a teacher achieving the same goal but from a different viewpoint; and unsupervised refers to the fact that the agent receives only these third-person demonstrations, and is not provided a correspondence between teacher states and student states. Our method's primary insight is that recent advances from domain confusion can be utilized to yield domain agnostic features which are crucial during the training process. To validate our approach, we report successful experiments on learning from third-person demonstrations in a pointmass domain, a reacher domain, and an inverted pendulum domain.
["demonstrations", "agent", "imitation learning", "third person imitation", "reinforcement learning", "problem", "task", "possible", "agents capable"]
https://openreview.net/forum?id=B16dGcqlx
https://openreview.net/pdf?id=B16dGcqlx
https://openreview.net/forum?id=B16dGcqlx&noteId=B1uj8o-Ee
S1Jpha-Vl
HysBZSqlx
ICLR.cc/2017/conference/-/paper238/official/review
{"title": "", "rating": "7: Good paper, accept", "review": "This paper presents a valuable new collection of video game benchmarks, in an extendable framework, and establishes initial baselines on a few of them.\n\nReward structures: for how many of the possible games have you implemented the means to extract scores and incremental reward structures? From the github repo it looks like about 10 -- do you plan to add more, and when?\n\n\u201crivalry\u201d training: this is one of the weaker components of the paper, and it should probably be emphasised less. On this topic, there is a vast body of (uncited) multi-agent literature, it is a well-studied problem setup (more so than RL itself). To avoid controversy, I would recommend not claiming any novel contribution on the topic (I don\u2019t think that you really invented \u201ca new method to train an agent by enabling it to train against several opponents\u201d nor \u201ca new benchmarking technique for agents evaluation, by enabling them to compete against each other, rather than playing against the in-game AI\u201d). Instead, just explain that you have established single-agent and multi-agent baselines for your new benchmark suite.\n\nYour definition of Q-function (\u201cpredicts the score at the end of the game given the current state and selected action\u201d) is incorrect. It should read something like: it estimates the cumulative discounted reward that can be obtained from state s, starting with action a (and then following a certain policy).\n\nMinor:\n* Eq (1): the Q-net inside the max() is the target network, with different parameters theta\u2019\n* the Du et al. reference is missing the year\n* some of the other references should point at the corresponding published papers instead of the arxiv versions", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Playing SNES in the Retro Learning Environment
["Nadav Bhonker", "Shai Rozenberg", "Itay Hubara"]
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment — RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.
["Reinforcement Learning", "Deep learning", "Games"]
https://openreview.net/forum?id=HysBZSqlx
https://openreview.net/pdf?id=HysBZSqlx
https://openreview.net/forum?id=HysBZSqlx&noteId=S1Jpha-Vl
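The review above corrects the paper's definition of the Q-function and notes that the max in Eq. (1) should use the target network with separate parameters theta'. Written out, the standard definitions being pointed to are:

```latex
Q^{\pi}(s, a) = \mathbb{E}\!\left[\sum_{t \ge 0} \gamma^{t} r_{t} \,\middle|\, s_0 = s,\; a_0 = a,\; \pi\right],
\qquad
y = r + \gamma \max_{a'} Q(s', a';\, \theta')
```

that is, the expected cumulative discounted reward obtained from state s, starting with action a and thereafter following policy pi, and the DQN regression target in which theta' are the periodically updated target-network parameters (the online network with parameters theta is trained toward y).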
H1f6QHHVl
HysBZSqlx
ICLR.cc/2017/conference/-/paper238/official/review
{"title": "Final review: Nice software contribution, expected more significant scientific contributions", "rating": "5: Marginally below acceptance threshold", "review": "The paper presents a new environment, called Retro Learning Environment (RLE), for reinforcement learning. The authors focus on Super Nintendo but claim that the interface supports many others (including ALE). Benchmark results are given for standard algorithms in 5 new Super Nintendo games, and some results using a new \"rivalry metric\".\n\nThese environments (or, more generally, standardized evaluation methods like public data sets, competitions, etc.) have a long history of improving the quality of AI and machine learning research. One example in the past few years was the Atari Learning Environment (ALE) which has now turned into a standard benchmark for comparison of algorithms and results. In this sense, the RLE could be a worthy contribution to the field by encouraging new challenging domains for research.\n\nThat said, the main focus of this paper is presenting this new framework and showcasing the importance of new challenging domains. The results of experiments themselves are for existing algorithms. There are some new results that show reward shaping and policy shaping (having a bias toward going right in Super Mario) help during learning. And, yes, domain knowledge helps, but this is obvious. The rivalry training is an interesting idea, when training against a different opponent, the learner overfits to that opponent and forgets to play against the in-game AI; but then oddly, it gets evaluated on how well it does against the in-game AI! \n\nAlso the part of the paper that describes the scientific results (especially the rivalry training) is less polished, so this is disappointing. In the end, I'm not very excited about this paper.\n\nI was hoping for a more significant scientific contribution to accompany in this new environment. It's not clear if this is necessary for publication, but also it's not clear that ICLR is the right venue for this work due to the contribution being mainly about the new code (for example, mloss.org could be a better 'venue', JMLR has an associated journal track for accompanying papers: http://www.jmlr.org/mloss/)\n\n--- Post response:\n\nThank you for the clarifications. Ultimately I have not changed my opinion on the paper. Though I do think RLE could have a nice impact long-term, there is little new science in this paper, ad it's either too straight-forward (reward shaping, policy-shaping) or not quite developed enough (rivalry training).", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Playing SNES in the Retro Learning Environment
["Nadav Bhonker", "Shai Rozenberg", "Itay Hubara"]
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment — RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.
["Reinforcement Learning", "Deep learning", "Games"]
https://openreview.net/forum?id=HysBZSqlx
https://openreview.net/pdf?id=HysBZSqlx
https://openreview.net/forum?id=HysBZSqlx&noteId=H1f6QHHVl
Sy3UiUz4l
HysBZSqlx
ICLR.cc/2017/conference/-/paper238/official/review
{"title": "Ok but limited contributions", "rating": "4: Ok but not good enough - rejection", "review": "This paper introduces a new reinforcement learning environment called \u00ab The Retro Learning Environment\u201d, that interfaces with the open-source LibRetro API to offer access to various emulators and associated games (i.e. similar to the Atari 2600 Arcade Learning Environment, but more generic). The first supported platform is the SNES, with 5 games (more consoles and games may be added later). Authors argue that SNES games pose more challenges than Atari\u2019s (due to more complex graphics, AI and game mechanics). Several DQN variants are evaluated in experiments, and it is also proposed to compare learning algorihms by letting them compete against each other in multiplayer games.\n\nI like the idea of going toward more complex games than those found on Atari 2600, and having an environment where new consoles and games can easily be added sounds promising. With OpenAI Universe and DeepMind Lab that just came out, though, I am not sure we really need another one right now. Especially since using ROMs of emulated games we do not own is technically illegal: it looks like this did not cause too much trouble for Atari but it might start raising eyebrows if the community moves to more advanced and recent games, especially some Nintendo still makes money from.\n\nBesides the introduction of the environment, it is good to have DQN benchmarks on five games, but this does not add a lot of value. The authors also mention as contribution \"A new benchmarking technique, allowing algorithms to compete against each other, rather than playing against the in-game AI\", but this seems a bit exaggerated to me: the idea of pitting AIs against each other has been at the core of many AI competitions for decades, so it is hardly something new. The finding that reinforcement learning algorithms tend to specialize to their opponent is also not particular surprising.\n\nOverall I believe this is an ok paper but I do not feel it brings enough to the table for a major conference. This does not mean, however, that this new environment won't find a spot in the (now somewhat crowded) space of game-playing frameworks.\n\nOther small comments:\n- There are lots of typos (way too many to mention them all)\n- It is said that Infinite Mario \"still serves as a benchmark platform\", however as far as I know it had to be shutdown due to Nintendo not being too happy about it\n- \"RLE requires an emulator and a computer version of the console game (ROM file) upon initialization rather than a ROM file only. The emulators are provided with RLE\" => how is that different from ALE that requires the emulator Stella which is also provided with ALE?\n- Why is there no DQN / DDDQN result on Super Mario?\n- It is not clear if Figure 2 displays the F-Zero results using reward shaping or not\n- The Du et al reference seems incomplete", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Playing SNES in the Retro Learning Environment
["Nadav Bhonker", "Shai Rozenberg", "Itay Hubara"]
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment — RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.
["Reinforcement Learning", "Deep learning", "Games"]
https://openreview.net/forum?id=HysBZSqlx
https://openreview.net/pdf?id=HysBZSqlx
https://openreview.net/forum?id=HysBZSqlx&noteId=Sy3UiUz4l
HJf3GfM4e
rkE3y85ee
ICLR.cc/2017/conference/-/paper281/official/review
{"title": "Review: Categorical Reparameterization with Gumbel-Softmax", "rating": "6: Marginally above acceptance threshold", "review": "The authors propose a method for reparameterization gradients with categorical distributions. This is done by using the Gumbel-Softmax distribution, a smoothened version of the Gumbel-Max trick for sampling from a multinomial.\n\nThe paper is well-written and clear. The application to the semi-supervised model in Kingma et al. (2014) makes sense for large classes, as well as its application to general stochastic computation graphs (Schulman et al., 2015).\n\nOne disconcerting point is that (from my understanding at least), this does not actually perform variational inference for discrete latent variable models. Rather, it changes the probability model itself and performs approximate inference on the modified (continuous relaxed) version of the model. This is fine in practice given that it's all approximate inference, but unlike previous variational inference advances either in more expressive approximations or faster computation (as noted by the different gradient estimators they compare to), the probability model is fundamentally changed.\n\nTwo critical points seem key: the sensitivity of the temperature, and whether this applies for non-one hot encodings of the categorical distribution (and thus sufficiently scale to high dimensions). Comments by the authors on this are welcome.\n\nThere is a related work by Rolfe (2016) on discrete VAEs, who also consider a continuous relaxed approach. This is worth citing and comparing to (or at least mentioning) in the paper.\n\nReferences\n\nRolfe, J. T. (2016). Discrete Variational Autoencoders. arXiv.org.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Categorical Reparameterization with Gumbel-Softmax
["Eric Jang", "Shixiang Gu", "Ben Poole"]
Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.
["Deep learning", "Semi-Supervised Learning", "Optimization", "Structured prediction"]
https://openreview.net/forum?id=rkE3y85ee
https://openreview.net/pdf?id=rkE3y85ee
https://openreview.net/forum?id=rkE3y85ee&noteId=HJf3GfM4e
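The abstract above replaces a non-differentiable categorical sample with a differentiable Gumbel-Softmax sample. A minimal NumPy sketch of the forward-pass sampler (in practice the same computation is written in an autodiff framework so gradients flow through it; the class probabilities and temperatures below are illustrative):

```python
import numpy as np

def sample_gumbel(shape, eps=1e-20):
    """Standard Gumbel(0, 1) noise via -log(-log(U)) with U ~ Uniform(0, 1)."""
    u = np.random.uniform(size=shape)
    return -np.log(-np.log(u + eps) + eps)

def gumbel_softmax(logits, temperature):
    """Approximate categorical sample on the simplex:
    y_i = softmax((log pi_i + g_i) / tau). As tau -> 0 the samples
    approach one-hot draws from softmax(logits); larger tau smooths them."""
    y = (logits + sample_gumbel(logits.shape)) / temperature
    y = y - y.max()        # stabilize the softmax
    e = np.exp(y)
    return e / e.sum()

# Example: anneal the temperature and watch samples sharpen toward one-hot.
logits = np.log(np.array([0.1, 0.2, 0.3, 0.4]))
for tau in (5.0, 1.0, 0.1):
    print(tau, gumbel_softmax(logits, tau).round(3))
```

The single temperature tau is exactly the nuisance parameter whose sensitivity, annealing, or learning the reviews of this paper ask about.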
SJ1R_ieEg
rkE3y85ee
ICLR.cc/2017/conference/-/paper281/official/review
{"title": "The paper is well written but the novelty of the paper is less clear", "rating": "6: Marginally above acceptance threshold", "review": "The paper combines Gumbel distribution with the popular softmax function to obtain a continuous distribution on the simplex that can approximate categorical samples. It is not surprising that Gumbel softmax outperforms other single sample gradient estimators. However, I am curious about how Gumbel compares with Dirichlet experimentally. \n\nThe computational efficiency of the estimator when training semi-supervised models is nice. However, the advantage will be greater when the number of classes are huge, which doesn't seem to be the case in a simple dataset like MNIST. I am wondering why the experiments are not done on a richer dataset. \n\nThe presentation of the paper is neat and clean. The experiments settings are clearly explained and the analysis appears to be complete. \n\nThe only concern I have is the novelty of this work. I consider this work as a nice but may be incremental (relatively small) contribution to our community. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Categorical Reparameterization with Gumbel-Softmax
["Eric Jang", "Shixiang Gu", "Ben Poole"]
Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.
["Deep learning", "Semi-Supervised Learning", "Optimization", "Structured prediction"]
https://openreview.net/forum?id=rkE3y85ee
https://openreview.net/pdf?id=rkE3y85ee
https://openreview.net/forum?id=rkE3y85ee&noteId=SJ1R_ieEg
Sk0G5NVEg
rkE3y85ee
ICLR.cc/2017/conference/-/paper281/official/review
{"title": "Interesting idea, encouraging results", "rating": "7: Good paper, accept", "review": "This paper introduces a continuous relaxation of categorical distribution, namely the the Gumbel-Softmax distribution, such that generative models with categorical random variables can be trained using reparameterization (path-derivative) gradients. The method is shown to improve upon other methods in terms of the achieved log-likelihoods of the resulting models. The main contribution, namely the method itself, is simple yet nontrivial and worth publishing, and seems effective in experiments. The paper is well-written, and I applaud the details provided in the appendix. The main application seems to be semi-supervised situations where you really want categorical variables.\n\n - P1: \"differentiable sampling mechanism for softmax\". \"sampling\" => \"approximate sampling\", since it's technically sampling from the Gumbal-softmax.\n \n - P3: \"backpropagtion\"\n \n - Section 4.1: Interesting experiments.\n \n - It would be interesting to report whether there is any discrepancy between the relaxed and non-relaxed models in terms of log-likelihood. Currently, only the likelihoods under the non-relaxed models are reported.\n \n - It is slightly discouraging that the temperature (a nuisance parameter) is used differently across experiments. It would be nice to give more details on whether you were succesful in learning the temperature, instead of annealing it; it would be interesting if that hyper-parameter could be eliminated.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Categorical Reparameterization with Gumbel-Softmax
["Eric Jang", "Shixiang Gu", "Ben Poole"]
Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.
["Deep learning", "Semi-Supervised Learning", "Optimization", "Structured prediction"]
https://openreview.net/forum?id=rkE3y85ee
https://openreview.net/pdf?id=rkE3y85ee
https://openreview.net/forum?id=rkE3y85ee&noteId=Sk0G5NVEg
HJKt06-Ng
HyEeMu_xx
ICLR.cc/2017/conference/-/paper58/official/review
{"title": "Review", "rating": "7: Good paper, accept", "review": "The paper presents an architecture to incrementally attend to image regions - at multiple layers of a deep CNN. In contrast to most other models, the model does not apply a weighted average pooling in the earlier layers of the network but only in the last layer. Instead, the features are reweighted in each layer with the predicted attention.\n\n1.\tContribution of approach: The approach to use attention in this way is to my knowledge novel and interesting.\n2.\tQualitative results: \n2.1.\tI like the large number of qualitative results; however, I would have wished the focus would have been less on the \u201cnumber\u201d dataset and more on the Visual Genome dataset.\n2.2.\tThe qualitative results for the Genome dataset unfortunately does not provide the predicted attributes. It would be interesting to see e.g. the highest predicted attributes for a given query. So far the results only show the intermediate results.\n3.\tQualitative results:\n3.1.\tThe paper presents results on two datasets, one simulated dataset as well as Visual Genome. On both it shows moderate but significant improvements over related approaches.\n3.2.\tFor the visual genome dataset, it would be interesting to include a quantitative evaluation how good the localization performance is of the attention approach.\n3.3.\tIt would be interesting to get a more detailed understanding of the model by providing results for different CNN layers where the attention is applied.\n4.\tIt would be interesting to see results on more established tasks, e.g. VQA, where the model should similarly apply. In fact, the task on the numbers seems to be identical to the VQA task (input/output), so most/all state-of-the-art VQA approaches should be applicable.\n\n\nOther (minor/discussion points)\n-\tSomething seems wrong in the last two columns in Figure 11: the query \u201c7\u201d is blue not green. Either the query or the answer seem wrong.\n-\tSection 3: \u201cIn each layer, the each attended feature map\u201d -> \u201cIn each layer, each attended feature map\u201d\n-\tI think Appendix A would be clearer if it would be stated that is the attention mechanism used in SAN and which work it is based on.\n\n\nSummary:\nWhile the experimental evaluation could be improved with more detailed evaluation, comparisons, and qualitative results, the presented evaluation is sufficient to validate the approach. The approach itself is novel and interesting to my knowledge and speaks for acceptance.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Progressive Attention Networks for Visual Attribute Prediction
["Paul Hongsuck Seo", "Zhe Lin", "Scott Cohen", "Xiaohui Shen", "Bohyung Han"]
We propose a novel attention model which can accurately attend to target objects of various scales and shapes in images. The model is trained to gradually suppress irrelevant regions in an input image via a progressive attentive process over multiple layers of a convolutional neural network. The attentive process in each layer determines whether to pass or suppress features at certain spatial locations for use in the next layer. We further employ local contexts to estimate attention probability at each location since it is difficult to infer accurate attention by observing a feature vector from a single location only. The experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.
["Deep learning", "Computer vision", "Multi-modal learning"]
https://openreview.net/forum?id=HyEeMu_xx
https://openreview.net/pdf?id=HyEeMu_xx
https://openreview.net/forum?id=HyEeMu_xx&noteId=HJKt06-Ng
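The abstract above describes an attentive process that passes or suppresses features at each spatial location before the next layer, rather than pooling everything with one weighted average at the end. A toy sketch of one such gating step follows; the sigmoid per-location gate is an assumption for illustration (the paper's attention estimation additionally uses local context):

```python
import numpy as np

def attend_layer(features, attention_logits):
    """One progressive-attention step: a soft per-location mask in [0, 1]
    is multiplied into the feature map, passing or suppressing spatial
    locations before they reach the next layer.

    features: (H, W, C) feature map; attention_logits: (H, W) scores."""
    alpha = 1.0 / (1.0 + np.exp(-attention_logits))  # sigmoid gate per location
    return features * alpha[..., None]               # broadcast over channels

# Toy example: a 4x4x8 feature map gated by per-location scores.
feats = np.random.randn(4, 4, 8)
scores = np.random.randn(4, 4)
gated = attend_layer(feats, scores)  # same shape as feats, locations re-weighted
```

Stacking such steps over successive CNN layers is what lets the attended region be refined progressively, which the second review characterizes as "gating on every spatial feature".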
SynYYsrNe
HyEeMu_xx
ICLR.cc/2017/conference/-/paper58/official/review
{"title": "", "rating": "4: Ok but not good enough - rejection", "review": "This paper proposes an attention mechanism which is essentially a gating on every spatial feature. Though they claim novelty through the attention being progressive, progressive attention has been done before [Spatial Transformer Networks, Deep Networks with Internal Selective Attention through Feedback Connections], and the element-wise multiplicative gates are very similar to convolutional LSTMs and Highway Nets. There is a lack of novelty and no significant results.\n\nPros:\n- The idea of progressive attention on features is good, but has been done in [Spatial Transformer Networks, Deep Networks with Internal Selective Attention through Feedback Connections]\n- Good visualisations.\n\nCons:\n- No progressive baselines were evaluated, e.g. STN and HAN at every layer acting on featuremaps.\n- Not clear how the query is fed into the localisation networks of baselines.\n- The difference in performance between author-made synthetic data and the Visual Genome datasets between baselines and PAN is very different. Why is this? There is no significant performance gain on any standard datasets.\n- No real novelty.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Progressive Attention Networks for Visual Attribute Prediction
["Paul Hongsuck Seo", "Zhe Lin", "Scott Cohen", "Xiaohui Shen", "Bohyung Han"]
We propose a novel attention model which can accurately attend to target objects of various scales and shapes in images. The model is trained to gradually suppress irrelevant regions in an input image via a progressive attentive process over multiple layers of a convolutional neural network. The attentive process in each layer determines whether to pass or suppress features at certain spatial locations for use in the next layer. We further employ local contexts to estimate attention probability at each location since it is difficult to infer accurate attention by observing a feature vector from a single location only. The experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.
["Deep learning", "Computer vision", "Multi-modal learning"]
https://openreview.net/forum?id=HyEeMu_xx
https://openreview.net/pdf?id=HyEeMu_xx
https://openreview.net/forum?id=HyEeMu_xx&noteId=SynYYsrNe
SyYWBfzNl
HyEeMu_xx
ICLR.cc/2017/conference/-/paper58/official/review
{"title": "Good paper, but would help to have experiments on a more benchmarked dataset", "rating": "6: Marginally above acceptance threshold", "review": "This paper presents a hierarchical attention model that uses multiple stacked layers of soft attention in a convnet. The authors provide results on a synthetic dataset in addition to doing attribute prediction on the Visual Genome dataset.\n\nOverall I think this is a well executed paper, with good experimental results and nice qualitative visualizations. The main thing I believe it is missing would be experiments on a dataset like VQA which would help better place the significance of this work in context of other approaches. \n\nAn important missing citation is Graves 2013 which had an early version of the attention model. \n\nMinor typo:\n\"It confins possible attributes..\" -> It confines..\n\"ImageNet (Deng et al., 2009), is used, and three additional\" -> \".., are used,\"", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Progressive Attention Networks for Visual Attribute Prediction
["Paul Hongsuck Seo", "Zhe Lin", "Scott Cohen", "Xiaohui Shen", "Bohyung Han"]
We propose a novel attention model which can accurately attend to target objects of various scales and shapes in images. The model is trained to gradually suppress irrelevant regions in an input image via a progressive attentive process over multiple layers of a convolutional neural network. The attentive process in each layer determines whether to pass or suppress features at certain spatial locations for use in the next layer. We further employ local contexts to estimate attention probability at each location since it is difficult to infer accurate attention by observing a feature vector from a single location only. The experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.
["Deep learning", "Computer vision", "Multi-modal learning"]
https://openreview.net/forum?id=HyEeMu_xx
https://openreview.net/pdf?id=HyEeMu_xx
https://openreview.net/forum?id=HyEeMu_xx&noteId=SyYWBfzNl
S1Y403RQe
SyVVJ85lg
ICLR.cc/2017/conference/-/paper277/official/review
{"title": "Final review: Sound paper but a very simple model, few experiments at start but more added.", "rating": "6: Marginally above acceptance threshold", "review": "In PALEO the authors propose a simple model of execution of deep neural networks. It turns out that even this simple model allows to quite accurately predict the computation time for image recognition networks both in single-machine and distributed settings.\n\nThe ability to predict network running time is very useful, and the paper shows that even a simple model does it reasonably, which is a strength. But the tests are only performed on a few networks of very similar type (AlexNet, Inception, NiN) and only in a few settings. Much broader experiments, including a variety of models (RNNs, fully connected, adversarial, etc.) in a variety of settings (different batch sizes, layer sizes, node placement on devices, etc.) would probably reveal weaknesses of the proposed very simplified model. This is why this reviewer considers this paper borderline -- it's a first step, but a very basic one and without sufficiently large experimental underpinning.\n\nMore experiments were added, so I'm updating my score.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Paleo: A Performance Model for Deep Neural Networks
["Hang Qi", "Evan R. Sparks", "Ameet Talwalkar"]
Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called Paleo. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, Paleo can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that Paleo is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.
["Deep learning"]
https://openreview.net/forum?id=SyVVJ85lg
https://openreview.net/pdf?id=SyVVJ85lg
https://openreview.net/forum?id=SyVVJ85lg&noteId=S1Y403RQe
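The abstract above describes extracting a network's computational requirements and mapping them onto a hardware/software/communication design point. A heavily simplified sketch of that style of analytical estimate (a roofline-like per-layer term plus an idealized ring all-reduce; every device constant and the conv-layer arithmetic below are illustrative assumptions, not Paleo's actual model):

```python
def layer_time(flops, bytes_io, peak_flops, mem_bandwidth, efficiency=0.5):
    """Estimate one layer's execution time in seconds as the slower of a
    derated compute term and a memory-traffic term (roofline style)."""
    compute = flops / (peak_flops * efficiency)
    memory = bytes_io / mem_bandwidth
    return max(compute, memory)

def allreduce_time(param_bytes, workers, bandwidth):
    """Idealized ring all-reduce cost for one gradient synchronization."""
    return 2.0 * (workers - 1) / workers * param_bytes / bandwidth

# Example: a 3x3 conv, 224x224 spatial size, 64 -> 64 channels, batch 32 (fp32).
flops = 2 * 32 * 224 * 224 * 64 * 64 * 3 * 3   # multiply-adds counted as 2 ops
bytes_io = 4 * 32 * 224 * 224 * (64 + 64)      # input + output activations
step = layer_time(flops, bytes_io, peak_flops=6e12, mem_bandwidth=300e9)
sync = allreduce_time(param_bytes=4 * 64 * 64 * 3 * 3, workers=8, bandwidth=10e9)
print(step, sync)
```

Summing such per-layer and per-synchronization terms over a network specification gives the flavor of end-to-end scalability estimate that the reviews evaluate.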
H1GUJz-Ne
SyVVJ85lg
ICLR.cc/2017/conference/-/paper277/official/review
{"title": "", "rating": "7: Good paper, accept", "review": "This paper introduces an analytical performance model to estimate the training and evaluation time of a given network for different software, hardware and communication strategies. \nThe paper is very clear. The authors included many freedoms in the variables while calculating the run-time of a network such as the number of workers, bandwidth, platform, and parallelization strategy. Their results are consistent with the reported results from literature.\nFurthermore, their code is open-source and the live demo is looking good. \nThe authors mentioned in their comment that they will allow users to upload customized networks and model splits in the coming releases of the interface, then the tool can become very useful.\nIt would be interesting to see some newer network architectures with skip connections such as ResNet, and DenseNet.\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Paleo: A Performance Model for Deep Neural Networks
["Hang Qi", "Evan R. Sparks", "Ameet Talwalkar"]
Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called Paleo. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, Paleo can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that Paleo is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.
["Deep learning"]
https://openreview.net/forum?id=SyVVJ85lg
https://openreview.net/pdf?id=SyVVJ85lg
https://openreview.net/forum?id=SyVVJ85lg&noteId=H1GUJz-Ne
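The Paleo reviews above center on its analytical modeling of a network's computational requirements. As a back-of-the-envelope illustration of the kind of per-layer operation counting such a model performs (a hedged sketch, not Paleo's actual code; the function name and the one-multiply-plus-one-add FLOP convention are our own assumptions):

```python
# Hypothetical per-layer FLOP count for a 2-D convolution, in the spirit of
# an analytical performance model; not Paleo's implementation.
def conv2d_flops(h, w, c_in, c_out, k, stride=1, pad=0):
    h_out = (h + 2 * pad - k) // stride + 1
    w_out = (w + 2 * pad - k) // stride + 1
    # one multiply plus one add per weight application
    return 2 * h_out * w_out * c_out * (c_in * k * k)

# AlexNet's first layer: 227x227x3 input, 96 filters of 11x11, stride 4.
print(conv2d_flops(227, 227, 3, 96, 11, stride=4))  # ~2.1e8 FLOPs
```

Dividing such counts by a device's sustained throughput, and adding a communication term for the chosen parallelization strategy, gives the flavor of the scalability estimates the reviewers found consistent with published results.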
SyzvzN7Qx
SyVVJ85lg
ICLR.cc/2017/conference/-/paper277/official/review
{"title": "Technically sound. Only useful under the assumption that the code is released.", "rating": "6: Marginally above acceptance threshold", "review": "This paper is technically sound. It highlights well the strengths and weaknesses of the proposed simplified model.\n\nIn terms of impact, its novelty is limited, in the sense that the authors did seemingly the right thing and obtained the expected outcomes. The idea of modeling deep learning computation is not in itself particularly novel. As a companion paper to an open source release of the model, it would meet my bar of acceptance in the same vein as a paper describing a novel dataset, which might not provide groundbreaking insights, yet be generally useful to the community.\n\nIn the absence of released code, even if the authors promise to release it soon, I am more ambivalent, since that's where all the value lies. It would also be a different story if the authors had been able to use this framework to make novel architectural decisions that improved training scalability in some way, and incorporated such new insights in the paper.\n\nUPDATED: code is now available. Revised review accordingly.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Paleo: A Performance Model for Deep Neural Networks
["Hang Qi", "Evan R. Sparks", "Ameet Talwalkar"]
Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called Paleo. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, Paleo can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that Paleo is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.
["Deep learning"]
https://openreview.net/forum?id=SyVVJ85lg
https://openreview.net/pdf?id=SyVVJ85lg
https://openreview.net/forum?id=SyVVJ85lg&noteId=SyzvzN7Qx
r1bVaaUNx
rJY0-Kcll
ICLR.cc/2017/conference/-/paper472/official/review
{"title": "An interesting work to understand gradient descent as recurrent process", "rating": "6: Marginally above acceptance threshold", "review": "This paper describes a new approach to meta learning by interpreting the SGD update rule as gated recurrent model with trainable parameters. The idea is original and important for research related to transfer learning. The paper has a clear structure, but clarity could be improved at some points.\n\nPros:\n\n- An interesting and feasible approach to meta-learning\n- Competitive results and proper comparison to state-of-the-art\n- Good recommendations for practical systems\n\nCons:\n\n- The analogy would be closer to GRUs than LSTMs\n- The description of the data separation in meta sets is hard to follow and could be visualized\n- The experimental evaluation is only partly satisfying, especially the effect of the parameters of i_t and f_t would be of interest\n- Fig 2 doesn't have much value\n\nRemarks:\n\n- Small typo in 3.2: \"This means each coordinate has it\" -> its\n\n> We plan on releasing the code used in our evaluation experiments.\n\nThis would certainly be a major plus.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Optimization as a Model for Few-Shot Learning
["Sachin Ravi", "Hugo Larochelle"]
Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.
["model", "optimization", "learning", "learning optimization", "deep neural networks", "great success", "large data domain", "learning tasks", "examples", "class"]
https://openreview.net/forum?id=rJY0-Kcll
https://openreview.net/pdf?id=rJY0-Kcll
https://openreview.net/forum?id=rJY0-Kcll&noteId=r1bVaaUNx
SyiRxi7El
rJY0-Kcll
ICLR.cc/2017/conference/-/paper472/official/review
{"title": "Strong paper but presentation unclear at times", "rating": "8: Top 50% of accepted papers, clear accept", "review": "In light of the authors' responsiveness and the updates to the manuscript -- in particular to clarify the meta-learning task -- I am updating my score to an 8.\n\n-----\n\nThis manuscript proposes to tackle few-shot learning with neural networks by leveraging meta-learning, a classic idea that has seen a renaissance in the last 12 months. The authors formulate few-shot learning as a sequential meta-learning problem: each \"example\" includes a sequence of batches of \"training\" pairs, followed by a final \"test\" batch. The inputs at each \"step\" include the outputs of a \"base learner\" (e.g., training loss and gradients), as well as the base learner's current state (parameters). The paper applies an LSTM to this meta-learning problem, using the inner memory cells in the *second* layer to directly model the updated parameters of the base learner. In doing this, they note similarities between the respective update rules of LSTM memory cells and gradient descent. Updates to the LSTM meta-learner are computed based on the base learner's prediction loss for the final \"test\" batch. The authors make several simplifying assumptions, such as sharing weights across all second layer cells (analogous to using the same learning rate for all parameters). The paper recreates the Mini-ImageNet data set proposed in Vinyals et al 2016, and shows that the meta-learner LSTM is competitive with the current state-of-the-art (Matchin Networks, Vinyals 2016) on 1- and 5-shot learning.\n\nStrengths:\n- It is intriguing -- and in hindsight, natural -- to cast the few-shot learning problem as a sequential (meta-)learning problem. While the authors did not originate the general idea of persisting learning across a series of learning problems, I think it is fair to say that they have advanced the state of the art, though I cannot confidently assert its novelty as I am not deeply familiar with recent work on meta-learning.\n- The proposed approach is competitive with and outperforms Vinyals 2016 in 1-shot and 5-shot Mini-ImageNet experiments.\n- The base learner in this setting (simple ConvNet classifier) is quite different from the nearest-neighbor-on-top-of-learned-embedding approach used in Vinyals 2016. It is always exciting when state-of-the-art results can be reported using very different approaches, rather than incremental follow-up work.\n- As far as I know, the insight about the relationship between the memory cell and gradient descent updates is novel here. It is interesting regardless.\n- The paper offers several practical insights about how to design and train an LSTM meta-learner, which should make it easier for others to replicate this work and apply these ideas to new problems. These include proper initialization, weight sharing across coordinates, and the importance of normalizing/rescaling the loss, gradient, and parameter inputs. Some of the insights have been previously described (the importance of simulating test conditions during meta-training; assuming independence between meta-learner and base learner parameters when taking gradients with respect to the meta-learner parameters), but the discussion here is useful nonetheless.\n\nWeaknesses:\n- The writing is at times quite opaque. While it describes very interesting work, I would not call the paper an enjoyable read. 
It took me multiple passes (as well as consulting related work) to understand the general learning problem. The task description in Section 2 (Page 2) is very abstract and uses notation and language that is not common outside of this sub-area. The paper could benefit from a brief concrete example (based on MNIST is fine), perhaps paired with a diagram illustrating a sequence of few-shot learning tasks. This would definitely make it accessible to a wider audience.\n- Following up on that note, the precise nature of the N-class, few-shot learning problem here is unclear to me. Specifically, the Mini-ImageNet data set has 100 labels, of which 64/16/20 are used during meta-training/validation/testing. Does this mean that only 64/100 classes are observed through meta-training? Or does it mean that only 64/100 are observed in each batch, but on average all 100 are observed during meta-training? If it's the former, how many outputs does the softmax layer of the ConvNet base learner have during meta-training? 64 (only those observed in training) or 100 (of which 36 are never observed)? Many other details like these are unclear (see question).\n- The plots in Figure 2 are pretty uninformative in and of themselves, and the discussion section offers very little insight around them.\n\nThis is an interesting paper with convincing results. It seems like a fairly clear accept, but the presentation of the ideas and work therein could be improved. I will definitely raise my score if the writing is improved.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Optimization as a Model for Few-Shot Learning
["Sachin Ravi", "Hugo Larochelle"]
Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.
["model", "optimization", "learning", "learning optimization", "deep neural networks", "great success", "large data domain", "learning tasks", "examples", "class"]
https://openreview.net/forum?id=rJY0-Kcll
https://openreview.net/pdf?id=rJY0-Kcll
https://openreview.net/forum?id=rJY0-Kcll&noteId=SyiRxi7El
BJPokH_Vg
rJY0-Kcll
ICLR.cc/2017/conference/-/paper472/official/review
{"title": "nice paper", "rating": "9: Top 15% of accepted papers, strong accept", "review": "This work presents an LSTM based meta-learning framework to learn the optimization algorithm of a another learning algorithm (here a NN).\nThe paper is globally well written and the presentation of the main material is clear. The crux of the paper: drawing the parallel between Robbins Monroe update rule and the LSTM update rule and exploit it to satisfy the two main desiderata of few shot learning (1- quick acquisition of new knowledge, 2- slower extraction of general transferable knowledge) is intriguing. \n\nSeveral tricks re-used from (Andrychowicz et al. 2016) such as parameter sharing and normalization, and novel design choices (specific implementation of batch normalization) are well motivated. \nThe experiments are convincing. This is a strong paper. My only concerns/questions are the following:\n\n1. Can it be redundant to use the loss, gradient and parameters as input to the meta-learner? Did you do ablative studies to make sure simpler combinations are not enough.\n2. It would be great if other architectural components of the network can be learned in a similar fashion (number of neurons, type of units, etc.). Do you have an opinion about this?\n3. The related work section (mainly focused on meta learning) is a bit shallow. Meta-learning is a rather old topic and similar approaches have been tried to solve the same problem even if they were not using LSTMs:\n - Samy Bengio PhD thesis (1989) is all about this ;-)\n - Use of genetic programming for the search of a new learning rule for neural networks (S. Bengio, Y. Bengio, and J. Cloutier. 1994)\n - I am convince Schmidhuber has done something, make sure you find it and update related work section. \n\nOverall, I like the paper. I believe the discussed material is relevant to a wide audience at ICLR. \n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Optimization as a Model for Few-Shot Learning
["Sachin Ravi", "Hugo Larochelle"]
Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.
["model", "optimization", "learning", "learning optimization", "deep neural networks", "great success", "large data domain", "learning tasks", "examples", "class"]
https://openreview.net/forum?id=rJY0-Kcll
https://openreview.net/pdf?id=rJY0-Kcll
https://openreview.net/forum?id=rJY0-Kcll&noteId=BJPokH_Vg
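Several of the reviews above single out the parallel between the LSTM cell update and gradient descent. Restated in standard notation, as a convenience for readers (this is the analogy the paper builds on, not a new result):

```latex
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
\qquad \text{vs.} \qquad
\theta_t = \theta_{t-1} - \alpha_t \nabla_{\theta_{t-1}} \mathcal{L}_t
```

Identifying the cell state c_t with the learner parameters \theta_t, the two updates coincide when f_t = 1, i_t = \alpha_t, and \tilde{c}_t = -\nabla_{\theta_{t-1}} \mathcal{L}_t; the meta-learner instead learns i_t and f_t as functions of the current loss, gradient, and parameters.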
BJF0H7M4g
rkEFLFqee
ICLR.cc/2017/conference/-/paper497/official/review
{"title": "well-executed but limited novelty and impact", "rating": "7: Good paper, accept", "review": "This paper introduces an approach for future frame prediction in videos by decoupling motion and content to be encoded separately, and additionally using multi-scale residual connections. Qualitative and quantitative results are shown on KTH, Weizmann, and UCF-101 datasets.\n\nThe idea of decoupling motion and content is interesting, and seems to work well for this task. However, the novelty is relatively incremental given previous cited work on multi-stream networks, and it is not clear that this particular decoupling works well or is of broader interest beyond the specific task of future frame prediction.\n\nWhile results on KTH and Weizmann are convincing and significantly outperform baselines, the results are less impressive on less constrained UCF-101 dataset. The qualitative examples for UCF-101 are not convincing, as discussed in the pre-review question.\n\nOverall this is a well-executed work with an interesting though not extremely novel idea. Given the limited novelty of decoupling motion and content and impact beyond the specific application, the paper would be strengthened if this could be shown to be of broader interest e.g. for other video tasks.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Decomposing Motion and Content for Natural Video Sequence Prediction
["Ruben Villegas", "Jimei Yang", "Seunghoon Hong", "Xunyu Lin", "Honglak Lee"]
We propose a deep neural network for the prediction of future frames in natural video sequences. To effectively handle complex evolution of pixels in videos, we propose to decompose the motion and content, two key components generating dynamics in videos. Our model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By independently modeling motion and content, predicting the next frame reduces to converting the extracted content features into the next frame content by the identified motion features, which simplifies the task of prediction. Our model is end-to-end trainable over multiple time steps, and naturally learns to decompose motion and content without separate training. We evaluate the proposed network architecture on human activity videos using KTH, Weizmann action, and UCF-101 datasets. We show state-of-the-art performance in comparison to recent approaches. To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatio-temporal dynamics for pixel-level future prediction in natural videos.
["Computer vision", "Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=rkEFLFqee
https://openreview.net/pdf?id=rkEFLFqee
https://openreview.net/forum?id=rkEFLFqee&noteId=BJF0H7M4g
HkUoXJW4e
rkEFLFqee
ICLR.cc/2017/conference/-/paper497/official/review
{"title": "Interesting architecture for an important problem, but requires additional experiments.", "rating": "7: Good paper, accept", "review": "1) Summary\n\nThis paper investigates the usefulness of decoupling appearance and motion information for the problem of future frame prediction in natural videos. The method introduces a novel two-stream encoder-decoder architecture, MCNet, consisting of two separate encoders -- a convnet on single frames and a convnet+LSTM on sequences of temporal differences -- followed by combination layers (stacking + convolutions) and a deconvolutional network decoder leveraging also residual connections from the two encoders. The architecture is trained end-to-end using the objective and adversarial training strategy of Mathieu et al.\n\n2) Contributions\n\n+ The architecture seems novel and is well motivated. It is also somewhat related to the two-stream networks of Simonyan & Zisserman, which are very effective for real-world action recognition.\n+ The qualitative results are numerous, insightful, and very convincing (including quantitatively) on KTH & Weizmann, showing the benefits of decoupling content and motion for simple scenes with periodic motions, as well as the need for residual connections.\n\n3) Suggestions for improvement\n\nStatic dataset bias:\nIn response to the pre-review concerns about the observed static nature of the qualitative results, the authors added a simple baseline consisting in copying the pixels of the last observed frame. On the one hand, the updated experiments on KTH confirm the good results of the method in these conditions. On the other hand, the fact that this baseline is better than all other methods (not just the authors's) on UCF101 casts some doubts on whether reporting average statistics on UCF101 is insightful enough. Although the authors provide some qualitative analysis pertaining to the quantity of motion, further quantitative analysis seems necessary to validate the performance of this and other methods on future frame prediction. At least, the results on UCF101 should be disambiguated with respect to the type of scene, for instance by measuring the overall quantity of motion (e.g., l2 norm of time differences) and reporting PSNR and SSIM per quartile / decile. Ideally, other realistic datasets than UCF101 should be considered in complement. For instance, the Hollywood 2 dataset of Marszalek et al would be a good candidate, as it focuses on movies and often contains complex actor, camera, and background motions that would make the \"pixel-copying\" baseline very poor. Experiments on video datasets beyond actions, like the KITTI tracking benchmark, would also greatly improve the paper.\n\nAdditional recognition experiments:\nAs mentioned in pre-review questions, further UCF-101 experiments on action recognition tasks by fine-tuning would also greatly improve the paper. Classifying videos indeed requires learning both appearance and motion features, and the two-stream encoder + combination layers of the MCNet+Res architecture seem particularly adapted, if they indeed allowed for unsupervised pre-trainining of content and motion representations, as postulated by the authors. These experiments would also contribute to dispelling the aforementioned concerns about the static nature of the learned representations.\n\n4) Conclusion\n\nOverall, this paper proposes an interesting architecture for an important problem, but requires additional experiments to substantiate the claims made by the authors. 
If the authors make the aforementioned additional experiments and the results are convincing, then this paper would be clearly relevant for ICLR.\n\n5) Post-rebuttal final decision\n\nThe authors did a significant amount of additional work, following the suggestions made by the reviewers, and providing additional compelling experimental evidence. This makes this one of the most experimentally thorough ones for this problem. I, therefore, increase my rating, and suggest to accept this paper. Good job!", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Decomposing Motion and Content for Natural Video Sequence Prediction
["Ruben Villegas", "Jimei Yang", "Seunghoon Hong", "Xunyu Lin", "Honglak Lee"]
We propose a deep neural network for the prediction of future frames in natural video sequences. To effectively handle complex evolution of pixels in videos, we propose to decompose the motion and content, two key components generating dynamics in videos. Our model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By independently modeling motion and content, predicting the next frame reduces to converting the extracted content features into the next frame content by the identified motion features, which simplifies the task of prediction. Our model is end-to-end trainable over multiple time steps, and naturally learns to decompose motion and content without separate training. We evaluate the proposed network architecture on human activity videos using KTH, Weizmann action, and UCF-101 datasets. We show state-of-the-art performance in comparison to recent approaches. To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatio-temporal dynamics for pixel-level future prediction in natural videos.
["Computer vision", "Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=rkEFLFqee
https://openreview.net/pdf?id=rkEFLFqee
https://openreview.net/forum?id=rkEFLFqee&noteId=HkUoXJW4e
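One review above introduces a pixel-copying baseline (predict every future frame as a copy of the last observed frame) and asks for PSNR broken down by motion quantile. A minimal sketch of that baseline and metric (array shapes and the 0-255 pixel range are assumptions for illustration, not the paper's evaluation code):

```python
import numpy as np

def psnr(pred, target, max_val=255.0):
    # peak signal-to-noise ratio between two frames, in dB
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def copy_last_frame(observed, horizon):
    # predict every future frame as a copy of the last observed frame
    return np.repeat(observed[-1][None], horizon, axis=0)

# video: (T, H, W, C) uint8 array; observe 10 frames, predict the next 10
video = np.random.randint(0, 256, (20, 64, 64, 3), dtype=np.uint8)
preds = copy_last_frame(video[:10], horizon=10)
scores = [psnr(p, t) for p, t in zip(preds, video[10:])]
```

Binning `scores` by the L2 norm of the ground-truth time differences would yield the per-quartile analysis the reviewer requests.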
HySrJeGNl
rkEFLFqee
ICLR.cc/2017/conference/-/paper497/official/review
{"title": "", "rating": "6: Marginally above acceptance threshold", "review": "The paper presents a method for predicting video sequences in the lines of Mathieu et al. The contribution is the separation of the predictor into two different networks, picking up motion and content, respectively.\n\nThe paper is very interesting, but the novelty is low compared to the referenced work. As also pointed out by AnonReviewer1, there is a similarity with two-stream networks (and also a whole body of work building on this seminal paper). Separating motion and content has also been proposed for other applications, e.g. pose estimation.\n\nDetails :\n\nThe paper can be clearly understood if the basic frameworks (like GANs) are known, but the presentation is not general and good enough for a broad public.\n\nExample : Losses (7) to (9) are well known from the Matthieu et al. paper. However, to make the paper self-contained, they should be properly explained, and it should be mentioned that they are \"additional\" losses. The main target is the GAN loss. The adversarial part of the paper is not properly enough introduced. I do agree, that adversarial training is now well enough known in the community, but it should still be properly introduced. This also involves the explanation that L_Disc is the loss for a second network, the discriminator and explaining the role of both etc.\n\nEquation (1) : c is not explained (are these motion vectors)? c is also overloaded with the feature dimension c'.\n\nThe residual nature of the layer should be made more apparent in equation (3).\n\nThere are several typos, absence of articles and prepositions (\"of\" etc.). The paper should be reread carefully.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Decomposing Motion and Content for Natural Video Sequence Prediction
["Ruben Villegas", "Jimei Yang", "Seunghoon Hong", "Xunyu Lin", "Honglak Lee"]
We propose a deep neural network for the prediction of future frames in natural video sequences. To effectively handle complex evolution of pixels in videos, we propose to decompose the motion and content, two key components generating dynamics in videos. Our model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By independently modeling motion and content, predicting the next frame reduces to converting the extracted content features into the next frame content by the identified motion features, which simplifies the task of prediction. Our model is end-to-end trainable over multiple time steps, and naturally learns to decompose motion and content without separate training. We evaluate the proposed network architecture on human activity videos using KTH, Weizmann action, and UCF-101 datasets. We show state-of-the-art performance in comparison to recent approaches. To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatio-temporal dynamics for pixel-level future prediction in natural videos.
["Computer vision", "Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=rkEFLFqee
https://openreview.net/pdf?id=rkEFLFqee
https://openreview.net/forum?id=rkEFLFqee&noteId=HySrJeGNl
Hky8MaWVx
BkIqod5ll
ICLR.cc/2017/conference/-/paper463/official/review
{"title": "Important problem, but lacks clarity and I'm not sure what the contribution is.", "rating": "3: Clear rejection", "review": "This work proposes a convolutional architecture for any graph-like input data (where the structure is example-dependent), or more generally, any data where the input dimensions that are related by a similarity matrix. If instead each input example is associated with a transition matrix, then a random walk algorithm is used generate a similarity matrix.\n\nDeveloping convolutional or recurrent architectures for graph-like data is an important problem because we would like to develop neural networks that can handle inputs such as molecule structures or social networks. However, I don't think this work contributes anything significant to the work that has already been done in this area. \n\nThe two main proposals I see in this paper are:\n1) For data associated with a transition matrix, this paper proposes that the transition matrix be converted to a similarity matrix. This seems obvious.\n2) For data associated with a similarity matrix, the k nearest neighbors of each node are computed and supply the context information for that node. This also seems obvious.\n\nPerhaps I have misunderstood the contribution, but the presentation also lacks clarity, and I cannot recommend this paper for publication. \n\nSpecific Comments:\n1) On page 4: \"An interesting attribute of this convolution, as compared to other convolutions on graphs is that, it preserves locality while still being applicable over different graphs with different structures.\" This is false; the other proposed architectures can be applied to inputs with different structures (e.g. Duvenaud et. al., Lusci et. al. for NN architectures on molecules specifically). ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Convolutional Neural Networks Generalization Utilizing the Data Graph Structure
["Yotam Hechtlinger", "Purvasha Chakravarti", "Jining Qin"]
Convolutional Neural Networks have proved to be very efficient in image and audio processing. Their success is mostly attributed to the convolutions which utilize the geometric properties of a low-dimensional grid structure. This paper suggests a generalization of CNNs to graph-structured data with varying graph structure, that can be applied to standard regression or classification problems by learning the graph structure of the data. We propose a novel convolution framework approach on graphs which utilizes a random walk to select relevant nodes. The convolution shares weights on all features, providing the desired parameter efficiency. Furthermore, the additional computations in the training process are only executed once in the pre-processing step. We empirically demonstrate the performance of the proposed CNN on MNIST data set, and challenge the state-of-the-art on Merck molecular activity data set.
["Supervised Learning", "Deep learning"]
https://openreview.net/forum?id=BkIqod5ll
https://openreview.net/pdf?id=BkIqod5ll
https://openreview.net/forum?id=BkIqod5ll&noteId=Hky8MaWVx
S1bH1BMNg
BkIqod5ll
ICLR.cc/2017/conference/-/paper463/official/review
{"title": "Final review.", "rating": "6: Marginally above acceptance threshold", "review": "Update: I thank the authors for their comments! After reading them, I decided to increase the rating.\n\nThis paper proposes a variant of the convolution operation suitable for a broad class of graph structures. For each node in the graph, a set of neighbours is devised by means of random walk (the neighbours are ordered by the expected number of visits). As a result, the graph is transformed into a feature matrix resembling MATLAB\u2019s/Caffe\u2019s im2col output. The convolution itself becomes a matrix multiplication. \n\nAlthough the proposed convolution variant seems reasonable, I\u2019m not convinced by the empirical evaluation. The MNIST experiment looks especially suspicious. I don\u2019t think that this dataset is appropriate for the demonstration purposes in this case. In order to make their method applicable to the data, the authors remove important structural information (relative locations of pixels) thus artificially increasing the difficulty of the task. At the same time, they are comparing their approach with regular CNNs and conclude that the former performs poorly (and does not even reach an acceptable accuracy for the particular dataset).\n\nI guess, to justify the presence of MNIST (or similar datasets) in the experimental section, the authors should modify their method to incorporate additional graph structure (e.g. relative locations of nodes) in cases when the relation between nodes cannot be fully described by a similarity matrix.\n\nI believe, in its current form, the paper is not yet ready for publication but may be later resubmitted to a workshop or another conference after the concern above is addressed.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Convolutional Neural Networks Generalization Utilizing the Data Graph Structure
["Yotam Hechtlinger", "Purvasha Chakravarti", "Jining Qin"]
Convolutional Neural Networks have proved to be very efficient in image and audio processing. Their success is mostly attributed to the convolutions which utilize the geometric properties of a low-dimensional grid structure. This paper suggests a generalization of CNNs to graph-structured data with varying graph structure, that can be applied to standard regression or classification problems by learning the graph structure of the data. We propose a novel convolution framework approach on graphs which utilizes a random walk to select relevant nodes. The convolution shares weights on all features, providing the desired parameter efficiency. Furthermore, the additional computations in the training process are only executed once in the pre-processing step. We empirically demonstrate the performance of the proposed CNN on MNIST data set, and challenge the state-of-the-art on Merck molecular activity data set.
["Supervised Learning", "Deep learning"]
https://openreview.net/forum?id=BkIqod5ll
https://openreview.net/pdf?id=BkIqod5ll
https://openreview.net/forum?id=BkIqod5ll&noteId=S1bH1BMNg
Sk0nICB4l
BkIqod5ll
ICLR.cc/2017/conference/-/paper463/official/review
{"title": "Modifies the way neighbors are computed for Graph-convolutional networks, but doesn't show that this modification is an improvement..", "rating": "3: Clear rejection", "review": "Previous literature uses data-derived adjacency matrix A to obtain neighbors to use as foundation of graph convolution. They propose extending the set of neighbors by additionally including nodes reachable by i<=k steps in this graph. This introduces an extra tunable parameter k, so it needs some justification over the previous k=1 solution. In one experiment provided\u00a0(Merk), using k=1 worked better. They don't specify which k that used, just that it was big enough for their to be p=5 nodes obtained as neighbors. In the second experiment (MNIST), they used k=1 for their experiments, which is what previous work (Coats & Ng 2011) proposed as well. A compelling experiment would compare to k=1 and show that using k>1 gives improvement strong enough to justify an extra hyper-parameter."}
review
2017
ICLR.cc/2017/conference
Convolutional Neural Networks Generalization Utilizing the Data Graph Structure
["Yotam Hechtlinger", "Purvasha Chakravarti", "Jining Qin"]
Convolutional Neural Networks have proved to be very efficient in image and audio processing. Their success is mostly attributed to the convolutions which utilize the geometric properties of a low-dimensional grid structure. This paper suggests a generalization of CNNs to graph-structured data with varying graph structure, that can be applied to standard regression or classification problems by learning the graph structure of the data. We propose a novel convolution framework approach on graphs which utilizes a random walk to select relevant nodes. The convolution shares weights on all features, providing the desired parameter efficiency. Furthermore, the additional computations in the training process are only executed once in the pre-processing step. We empirically demonstrate the performance of the proposed CNN on MNIST data set, and challenge the state-of-the-art on Merck molecular activity data set.
["Supervised Learning", "Deep learning"]
https://openreview.net/forum?id=BkIqod5ll
https://openreview.net/pdf?id=BkIqod5ll
https://openreview.net/forum?id=BkIqod5ll&noteId=Sk0nICB4l
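To make the neighbor-selection step debated in these reviews concrete, here is a hedged NumPy sketch: normalize the similarity matrix into a row-stochastic transition matrix, accumulate expected visit counts over walks of length up to k, and keep the p most-visited nodes per row. Variable names and the exact normalization are our own assumptions, not the authors' code:

```python
import numpy as np

def rw_neighbors(S, k=3, p=5):
    # S: (n, n) nonnegative similarity matrix
    P = S / S.sum(axis=1, keepdims=True)   # row-stochastic transitions
    visits = np.zeros_like(P)
    Q = np.eye(len(S))
    for _ in range(k):                     # accumulate P + P^2 + ... + P^k
        Q = Q @ P
        visits += Q
    # indices of the p nodes with highest expected visits, per row
    return np.argsort(-visits, axis=1)[:, :p]

S = np.abs(np.random.default_rng(0).standard_normal((10, 10)))
S = (S + S.T) / 2                          # symmetrize into a valid graph
print(rw_neighbors(S, k=3, p=5).shape)     # (10, 5)
```

The last reviewer's complaint maps directly onto the `k` argument above: the reported experiments either use k=1 or do not isolate the gain from k>1.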
ByQ-cqT7x
rJfMusFll
ICLR.cc/2017/conference/-/paper124/official/review
{"title": "clearly written, natural extension of previous work", "rating": "8: Top 50% of accepted papers, clear accept", "review": "The paper discuss a \"batch\" method for RL setup to improve chat-bots.\nThe authors provide nice overview of the RL setup they are using and present an algorithm which is similar to previously published on line setup for the same problem. They make a comparison to the online version and explore several modeling choices. \n\nI find the writing clear, and the algorithm a natural extension of the online version.\n\nBelow are some constructive remarks:\n- Comparison of the constant vs. per-state value function: In the artificial experiment there was no difference between the two while on the real-life task there was. It will be good to understand why, and add this to the discussion. Here is one option:\n- For the artificial task it seems like you are giving the constant value function an unfair advantage, as it can update all the weights of the model, and not just the top layer, like the per-state value function.\n- section 2.2:\n sentence before last: s' is not defined. \n last sentence: missing \"... in the stochastic case.\" at the end.\n- Section 4.1 last paragraph: \"While Bot-1 is not significant ...\" => \"While Bot-1 is not significantly different from ML ...\"\n\n\n\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Batch Policy Gradient Methods for Improving Neural Conversation Models
["Kirthevasan Kandasamy", "Yoram Bachrach", "Ryota Tomioka", "Daniel Tarlow", "David Carter"]
We study reinforcement learning of chat-bots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chat-bot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language uses on-policy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset.
["Natural language processing", "Reinforcement Learning"]
https://openreview.net/forum?id=rJfMusFll
https://openreview.net/pdf?id=rJfMusFll
https://openreview.net/forum?id=rJfMusFll&noteId=ByQ-cqT7x
ByVsGkMVx
rJfMusFll
ICLR.cc/2017/conference/-/paper124/official/review
{"title": "Review", "rating": "6: Marginally above acceptance threshold", "review": "This paper extends neural conversational models into the batch reinforcement learning setting. The idea is that you can collect human scoring data for some responses from a dialogue model, however such scores are expensive. Thus, it is natural to use off-policy learning \u2013 training a base policy on unsupervised data, deploying that policy to collect human scores, and then learning off-line from those scores.\n\nWhile the overall contribution is modest (extending off-policy actor-critic to the application of dialogue generation), the approach is well-motivated, and the paper is written clearly and is easy to understand. \n\nMy main concern is that the primary dataset used (restaurant recommendations) is very small (6000 conversations). In fact, it is several orders of magnitude smaller than other datasets used in the literature (e.g. Twitter, the Ubuntu Dialogue Corpus) for dialogue generation. It is a bit surprising to me that RNN chatbots (with no additional structure) are able to generate reasonable utterances on such a small dataset. Wen et al. (2016) are able to do this on a similarly small restaurant dataset, but this is mostly because they map directly from dialogue states to surface form, rather than some embedding representation of the context. Thus, it remains to be seen if the approaches in this paper also result in improvements when much more unsupervised data is available.\n\nReferences:\n\nWen, Tsung-Hsien, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. \"A Network-based End-to-End Trainable Task-oriented Dialogue System.\" arXiv preprint arXiv:1604.04562 (2016).\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Batch Policy Gradient Methods for Improving Neural Conversation Models
["Kirthevasan Kandasamy", "Yoram Bachrach", "Ryota Tomioka", "Daniel Tarlow", "David Carter"]
We study reinforcement learning of chat-bots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chat-bot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language uses on-policy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset.
["Natural language processing", "Reinforcement Learning"]
https://openreview.net/forum?id=rJfMusFll
https://openreview.net/pdf?id=rJfMusFll
https://openreview.net/forum?id=rJfMusFll&noteId=ByVsGkMVx
H1bSmrx4x
rJfMusFll
ICLR.cc/2017/conference/-/paper124/official/review
{"title": "", "rating": "7: Good paper, accept", "review": "The author propose to use a off-policy actor-critic algorithm in a batch-setting to improve chat-bots.\nThe approach is well motivated and the paper is well written, except for some intuitions for why the batch version outperforms the on-line version (see comments on \"clarification regarding batch vs. online setting\").\nThe artificial experiments are instructive, and the real-world experiments were performed very thoroughly although the results show only modest improvement. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Batch Policy Gradient Methods for Improving Neural Conversation Models
["Kirthevasan Kandasamy", "Yoram Bachrach", "Ryota Tomioka", "Daniel Tarlow", "David Carter"]
We study reinforcement learning of chat-bots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chat-bot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language uses on-policy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset.
["Natural language processing", "Reinforcement Learning"]
https://openreview.net/forum?id=rJfMusFll
https://openreview.net/pdf?id=rJfMusFll
https://openreview.net/forum?id=rJfMusFll&noteId=H1bSmrx4x
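For readers unfamiliar with the off-policy batch setting these reviews discuss, here is a generic importance-weighted policy-gradient update over logged data, written for a toy softmax policy. This is a hedged sketch of the general idea only, not the paper's exact BPG algorithm; the constant baseline and the behaviour probability `mu` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, d = 4, 8
W = 0.01 * rng.standard_normal((n_actions, d))      # softmax policy params

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def batch_pg_update(W, batch, lr=0.1, baseline=0.0):
    # batch of logged tuples (state, action, reward, behaviour prob of action)
    grad = np.zeros_like(W)
    for s, a, r, mu in batch:
        pi = softmax(W @ s)
        rho = pi[a] / mu                            # importance weight
        # grad of log pi(a|s) for a softmax policy: (onehot(a) - pi) s^T
        g_logp = (np.eye(n_actions)[a] - pi)[:, None] * s[None, :]
        grad += rho * (r - baseline) * g_logp
    return W + lr * grad / len(batch)               # ascent on expected reward

batch = [(rng.standard_normal(d), int(rng.integers(n_actions)),
          float(rng.random()), 0.25) for _ in range(32)]
W = batch_pg_update(W, batch)
```

Replacing the constant `baseline` with a learned per-state value function is exactly the design choice the first review asks the authors to analyze further.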
rkEX3x_Nx
rywUcQogx
ICLR.cc/2017/conference/-/paper550/official/review
{"title": "Unclear about the contribution ", "rating": "3: Clear rejection", "review": "It is not clear to me at all what this paper is contributing. Deep CCA (Andrew et al, 2013) already gives the gradient derivation of the correlation objective with respect to the network outputs which are then back-propagated to update the network weights. Again, the paper gives the gradient of the correlation (i.e. the CCA objective) w.r.t. the network outputs, so it is confusing to me when authors say that their differentiable version enables them to back-propagate directly through the computation of CCA. \n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Differentiable Canonical Correlation Analysis
["Matthias Dorfer", "Jan Schl\u00fcter", "Gerhard Widmer"]
Canonical Correlation Analysis (CCA) computes maximally-correlated linear projections of two modalities. We propose Differentiable CCA, a formulation of CCA that can be cast as a layer within a multi-view neural network. Unlike Deep CCA, an earlier extension of CCA to nonlinear projections, our formulation enables gradient flow through the computation of the CCA projection matrices, and free choice of the final optimization target. We show the effectiveness of this approach in cross-modality retrieval experiments on two public image-to-text datasets, surpassing both Deep CCA and a multi-view network with freely-learned projections. We assume that Differentiable CCA could be a useful building block for many multi-modality tasks.
["Multi-modal learning"]
https://openreview.net/forum?id=rywUcQogx
https://openreview.net/pdf?id=rywUcQogx
https://openreview.net/forum?id=rywUcQogx&noteId=rkEX3x_Nx
SJ-aT5ZNg
rywUcQogx
ICLR.cc/2017/conference/-/paper550/official/review
{"title": "paper needs to be more explicit", "rating": "4: Ok but not good enough - rejection", "review": "After a second look of the paper, I am still confused what the authors are trying to achieve.\n\nThe CCA objective is not differentiable in the sense that the sum of singular values (trace norm) of T is not differentiable. It appears to me (from the title, and section 3), the authors are trying to solve this problem. However,\n\n-- Did the authors simply reformulate the CCA objective or change the objective? The authors need to be explicit here.\n\n-- What is the relationship between the retrieval objective and the \"CCA layer\"? I could imagine different ways of combining them, such as combination or bi-level optimization. And I could not find discussion about this in section 3. For this, equations would be helpful.\n\n-- Even though the CCA objective is not differentiable in the above sense, it has not caused major problem for training (e.g., in principle we need batch training, but empirically using large minibatches works fine). The authors need to justify why the original gradient computation is problematic for what the authors are trying to achieve. From the authors' response to my question 2, it seems they still use SVD of T, so I am not sure if the proposed method has advantage in computational efficiency.\n\nIn terms of paper organization, it is better to describe the retrieval objective earlier than in the experiments. And I still encourage the authors to conduct the comparison with contrastive loss that I mentioned in my previous comments. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Differentiable Canonical Correlation Analysis
["Matthias Dorfer", "Jan Schl\u00fcter", "Gerhard Widmer"]
Canonical Correlation Analysis (CCA) computes maximally-correlated linear projections of two modalities. We propose Differentiable CCA, a formulation of CCA that can be cast as a layer within a multi-view neural network. Unlike Deep CCA, an earlier extension of CCA to nonlinear projections, our formulation enables gradient flow through the computation of the CCA projection matrices, and free choice of the final optimization target. We show the effectiveness of this approach in cross-modality retrieval experiments on two public image-to-text datasets, surpassing both Deep CCA and a multi-view network with freely-learned projections. We assume that Differentiable CCA could be a useful building block for many multi-modality tasks.
["Multi-modal learning"]
https://openreview.net/forum?id=rywUcQogx
https://openreview.net/pdf?id=rywUcQogx
https://openreview.net/forum?id=rywUcQogx&noteId=SJ-aT5ZNg
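For readers following this exchange about what is and is not differentiable: in the standard Deep CCA formulation (Andrew et al., 2013), with within-view covariances \Sigma_{11}, \Sigma_{22} and cross-covariance \Sigma_{12}, the total correlation is the trace norm of

```latex
T = \Sigma_{11}^{-1/2}\, \Sigma_{12}\, \Sigma_{22}^{-1/2},
\qquad
\operatorname{corr} = \|T\|_{\mathrm{tr}} = \sum_i \sigma_i(T)
```

The reviewer's point is that this sum of singular values is not differentiable everywhere (singular values can cross or hit zero), which is a separate question from back-propagating through the CCA projection matrices themselves.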
ry-2Cn1Eg
rywUcQogx
ICLR.cc/2017/conference/-/paper550/official/review
{"title": "Needs significant work before it can be publishable", "rating": "3: Clear rejection", "review": "The authors propose to combine a CCA objective with a downstream loss. This is a really nice and natural idea. However, both the execution and presentation leave a lot to be desired in the current version of the paper.\n\nIt is not clear what the overall objective is. This was asked in a pre-review question but the answer did not fully clarify it for me. Is it the sum of the CCA objective and the final (top-layer) objective, including the CCA constraints? Is there some interpolation of the two objectives? \n\nBy saying that the top-layer objective is \"cosine distance\" or \"squared cosine distance\", do you really mean you are just minimizing this distance between the matched pairs in the two views? If so, then of course that does not work out of the box without the intervening CCA layer: You could minimize it by setting all of the projections to a single point. A better comparison would be against a contrastive loss like the Hermann & Blunsom one mentioned in the reviewer question, which aims to both minimize the distance for matched pairs and separate mismatched ones (where \"mismatched\" ones can be uniformly drawn, or picked in some cleverer way). But other discriminative top-layer objectives that are tailored to a downstream task could make sense.\n\nThere is some loose terminology in the paper. The authors refer to the \"correlation\" and \"cross-correlation\" between two vectors. \"Correlation\" normally applies to scalars, so you need to define what you mean here. \"Cross-correlation\" typically refers to time series. In eq. (2) you are taking the max of a matrix. Finally I am not too sure in what way this approach is \"fully differentiable\" while regular CCA is not -- perhaps it is worth revisiting this term as well.\n\nAlso just a small note about the relationship between cosine distance and correlation: they are related when we view the dimensions of each of the two vectors as samples of a single random variable. In that case the cosine distance of the (mean-normalized) vectors is the same as the correlation between the two corresponding random variables. In CCA we are viewing each dimension of the vectors as its own random variable. So I fear the claim about cosine distance and correlation is a bit of a red herring here.\n\nA couple of typos:\n\n\"prosed\" --> \"proposed\"\n\"allong\" --> \"along\"\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Differentiable Canonical Correlation Analysis
["Matthias Dorfer", "Jan Schl\u00fcter", "Gerhard Widmer"]
Canonical Correlation Analysis (CCA) computes maximally-correlated linear projections of two modalities. We propose Differentiable CCA, a formulation of CCA that can be cast as a layer within a multi-view neural network. Unlike Deep CCA, an earlier extension of CCA to nonlinear projections, our formulation enables gradient flow through the computation of the CCA projection matrices, and free choice of the final optimization target. We show the effectiveness of this approach in cross-modality retrieval experiments on two public image-to-text datasets, surpassing both Deep CCA and a multi-view network with freely-learned projections. We assume that Differentiable CCA could be a useful building block for many multi-modality tasks.
["Multi-modal learning"]
https://openreview.net/forum?id=rywUcQogx
https://openreview.net/pdf?id=rywUcQogx
https://openreview.net/forum?id=rywUcQogx&noteId=ry-2Cn1Eg
rypQ3tJ4e
HkuVu3ige
ICLR.cc/2017/conference/-/paper579/official/review
{"title": "This paper investigates the issue of orthogonality of the transfer weight matrix in RNNs and suggests an optimization formulation on the manifold of (semi)orthogonal matrices.", "rating": "5: Marginally below acceptance threshold", "review": "Vanishing and exploding gradients makes the optimization of RNNs very challenging. The issue becomes worse on tasks with long term dependencies that requires longer RNNs. One of the suggested approaches to improve the optimization is to optimize in a way that the transfer matrix is almost orthogonal. This paper investigate the role of orthogonality on the optimization and learning which is very important. The writing is sound and clear and arguments are easy to follow. The suggested optimization method is very interesting. The main shortcoming of this paper is the experiments which I find very important and I hope authors can update the experiment section significantly. Below I mention some comments on the experiment section:\n\n1- I think the experiments are not enough. At the very least, report the result on the adding problem and language modeling task on Penn Treebank.\n\n2- I understand that the copying task becomes difficult with non-lineary. However, removing non-linearity makes the optimization very different and therefore, it is very hard to conclude anything from the results on the copying task.\n\n3- I was not able to find the number of hidden units used for RNNs in different tasks.\n\n4- Please report the running time of your method in the paper for different numbers of hidden units, compare it with the SGD and mention the NN package you have used.\n\n5- The results on Table 1 and Table 2 might also suggest that the orthogonality is not really helpful since even without a margin, the numbers are very close compare to the case when you find the optimal margin. Am I right?\n\n6- What do we learn from Figure 2? It is left without any discussion.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
On orthogonality and learning recurrent networks with long term dependencies
["Eugene Vorontsov", "Chiheb Trabelsi", "Samuel Kadoury", "Chris Pal"]
It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation.
["Deep learning"]
https://openreview.net/forum?id=HkuVu3ige
https://openreview.net/pdf?id=HkuVu3ige
https://openreview.net/forum?id=HkuVu3ige&noteId=rypQ3tJ4e
ryRAK-8Vg
HkuVu3ige
ICLR.cc/2017/conference/-/paper579/official/review
{"title": "Interesting investigation into orthogonal parametrizations and initializations for RNNs", "rating": "7: Good paper, accept", "review": "This paper investigates the impact of orthogonal weight matrices on learning dynamics in RNNs. The paper proposes a variety of interesting optimization formulations that enforce orthogonality in the recurrent weight matrix to varying degrees. The experimental results demonstrate several conclusions: enforcing exact orthogonality does not help learning, while enforcing soft orthogonality or initializing to orthogonal weights can substantially improve learning. While some of the optimization methods proposed currently require matrix inversion and are therefore slow in wall clock time, orthogonal initialization and some of the soft orthogonality constraints are relatively inexpensive and may find their way into practical use.\n\nThe experiments are generally done to a high standard and yield a variety of useful insights, and the writing is clear.\n\nThe experimental results are based on using a fixed learning rate for the different regularization strengths. Learning speed might be highly dependent on this, and different strengths may admit different maximal stable learning rates. It would be instructive to optimize the learning rate for each margin separately (maybe on one of the shorter sequence lengths) to see how soft orthogonality impacts the stability of the learning process. Fig. 5, for instance, shows that a sigmoid improves stability\u2014but perhaps slightly reducing the learning rate for the non-sigmoid Gaussian prior RNN would make the learning well-behaved again for weightings less than 1.\n\nFig. 4 shows singular values converging around 1.05 rather than 1. Does initializing to orthogonal matrices multiplied by 1.05 confer any noticeable advantage over standard orthogonal matrices? Especially on the T=10K copy task?\n\n\u201cCuriously, larger margins and even models without sigmoidal constraints on the spectrum (no margin) performed well as long as they were initialized to be orthogonal suggesting that evolution away from orthogonality is not a serious problem on this task.\u201d This is consistent with the analysis given in Saxe et al. 2013, where for deep linear nets, if a singular value is initialized to 1 but dies away during training, this is because it must be zero to implement the desired input-output map. More broadly, an open question has been whether orthogonality is useful as an initialization, as proposed by Saxe et al., where its role is mainly as a preconditioner which makes optimization proceed quickly but doesn\u2019t fundamentally change the optimization problem; or whether it is useful as a regularizer, as proposed by Arjovsky et al. 2015 and Henaff et al. 2015, that is, as an additional constraint in the optimization problem (minimize loss subject to weights being orthogonal). These experiments seem to show that mere initialization to orthogonal weights is enough to reap an optimization speed advantage, and that too much regularization begins to hurt performance\u2014i.e., substantially changing the optimization problem is undesirable. This point is also apparent in Fig. 2: In terms of the training loss on MNIST (Fig. 2), no margin does almost indistinguishably from a margin of 1 or .1. However in terms of accuracy, a margin of .1 is best. 
This shows that large or nonexistent margins (i.e., orthogonal initializations) enable fast optimization of the training loss, but among models that attain similar training loss, the more nearly orthogonal weights perform better. This starts to separate out the optimization speed advantage conferred by orthogonality from the regularization advantage it confers. It may be useful to more explicitly discuss the initialization vs regularization dimension in the text.\n\nOverall, this paper contributes a variety of techniques and intuitions which are likely to be useful in training RNNs.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
On orthogonality and learning recurrent networks with long term dependencies
["Eugene Vorontsov", "Chiheb Trabelsi", "Samuel Kadoury", "Chris Pal"]
It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation.
["Deep learning"]
https://openreview.net/forum?id=HkuVu3ige
https://openreview.net/pdf?id=HkuVu3ige
https://openreview.net/forum?id=HkuVu3ige&noteId=ryRAK-8Vg
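The soft orthogonality constraints discussed in the review above are easy to sketch in isolation. Below is a minimal numpy illustration, not the authors' code, of the two ingredients the reviewer singles out as practical: orthogonal initialization via QR, and a soft penalty ||W^T W - I||_F^2 whose gradient can be added to the task gradient. The weighting `lam` is a hypothetical hyperparameter standing in for the paper's constraint strength.

```python
import numpy as np

def orthogonal_init(n, rng=np.random.default_rng(0)):
    """Random orthogonal matrix via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    # Fix the signs so columns are not biased by the QR convention.
    return q * np.sign(np.diag(r))

def soft_orthogonality_penalty(W, lam=1e-3):
    """Penalty lam * ||W^T W - I||_F^2 and its gradient with respect to W."""
    D = W.T @ W - np.eye(W.shape[1])
    penalty = lam * np.sum(D ** 2)
    grad = lam * 4.0 * W @ D  # d/dW ||W^T W - I||_F^2 = 4 W (W^T W - I)
    return penalty, grad

W = orthogonal_init(64)
p, g = soft_orthogonality_penalty(W)
print(p)  # ~0 at an orthogonal initialization; grows as W drifts away
```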
ByCXAcHVl
HkuVu3ige
ICLR.cc/2017/conference/-/paper579/official/review
{"title": "Interesting question and proposed approach, with significance restricted by limited experimental settings.", "rating": "5: Marginally below acceptance threshold", "review": "The paper is well-motivated, and is part of a line of recent work investigating the use of orthogonal weight matrices within recurrent neural networks. While using orthogonal weights addresses the issue of vanishing/exploding gradients, it is unclear whether anything is lost, either in representational power or in trainability, by enforcing orthogonality. As such, an empirical investigation that examines how these properties are affected by deviation from orthogonality is a useful contribution.\n\nThe paper is clearly written, and the primary formulation for investigating soft orthogonality constraints (representing the weight matrices in their SVD factorized form, which gives explicit control over the singular values) is clean and natural, albeit not necessarily ideal from a practical computational standpoint (as it requires maintaining multiple orthogonal weight matrices each requiring an expensive update step). I am unaware of this approach being investigated previously.\n\nThe experimental side, however, is somewhat lacking. The paper evaluates two tasks: a copy task, using an RNN architecture without transition non-linearities, and sequential/permuted sequential MNIST. These are reasonable choices for an initial evaluation, but are both toy problems and don't shed much light on the practical aspects of the proposed approaches. An evaluation in a more realistic setting would be valuable (e.g., a language modeling task).\n\nFurthermore, while investigating pure RNN's makes sense for evaluating effects of orthogonality, it feels somewhat academic: LSTMs also provide a mechanism to capture longer-term dependencies, and in the tasks where the proposed approach was compared directly to an LSTM, it was significantly outperformed. It would be very interesting to see the effects of the proposed soft orthogonality constraint in additional architectures (e.g., deep feed-forward architectures, or whether there's any benefit when embedded within an LSTM, although this seems doubtful).\n\nOverall, the paper addresses a clear-cut question with a well-motivated approach, and has interesting findings on some toy datasets. As such I think it could provide a valuable contribution. However, the significance of the work is restricted by the limited experimental settings (both datasets and network architectures).", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
On orthogonality and learning recurrent networks with long term dependencies
["Eugene Vorontsov", "Chiheb Trabelsi", "Samuel Kadoury", "Chris Pal"]
It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation.
["Deep learning"]
https://openreview.net/forum?id=HkuVu3ige
https://openreview.net/pdf?id=HkuVu3ige
https://openreview.net/forum?id=HkuVu3ige&noteId=ByCXAcHVl
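Both reviews above refer to a "margin" on the singular values. The following is a hedged numpy sketch of that mechanism as reconstructed from the abstract and reviews, not from the paper's code: the recurrent matrix is factorized as W = U diag(s) V^T and the singular values s are squashed by a sigmoid into [1 - m, 1 + m]. The exact squashing formula and all variable names are assumptions. A full implementation would also have to keep U and V orthogonal during training (e.g., with Cayley-style updates), which is the expensive step the second review mentions.

```python
import numpy as np

def bounded_spectrum(p, margin):
    """Map free parameters p to singular values in [1 - margin, 1 + margin]."""
    return 2.0 * margin / (1.0 + np.exp(-p)) + (1.0 - margin)

def factored_weight(U, V, p, margin=0.1):
    """Recompose W = U diag(s) V^T with sigmoid-bounded singular values s."""
    s = bounded_spectrum(p, margin)
    return (U * s) @ V.T  # U * s scales the columns of U by s

rng = np.random.default_rng(0)
n = 32
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
W = factored_weight(U, V, rng.standard_normal(n), margin=0.1)
print(np.linalg.svd(W, compute_uv=False).max())  # <= 1.1 by construction
```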
H1cHmCBNg
B1KBHtcel
ICLR.cc/2017/conference/-/paper484/official/review
{"title": "An Application of PN Network", "rating": "4: Ok but not good enough - rejection", "review": "This paper addresses automated argumentation mining using pointer network. Although the task and the discussion is interesting, the contribution and the novelty is marginal because this is a single-task application of PN among many potential tasks.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Here's My Point: Argumentation Mining with Pointer Networks
["Peter Potash", "Alexey Romanov", "Anna Rumshisky"]
One of the major goals in automated argumentation mining is to uncover the argument structure present in argumentative text. In order to determine this structure, one must understand how different individual components of the overall argument are linked. General consensus in this field dictates that the argument components form a hierarchy of persuasion, which manifests itself in a tree structure. This work provides the first neural network-based approach to argumentation mining, focusing on extracting links between argument components, with a secondary focus on classifying types of argument components. In order to solve this problem, we propose to use a modification of a Pointer Network architecture. A Pointer Network is appealing for this task for the following reasons: 1) It takes into account the sequential nature of argument components; 2) By construction, it enforces certain properties of the tree structure present in argument relations; 3) The hidden representations can be applied to auxiliary tasks. In order to extend the contribution of the original Pointer Network model, we construct a joint model that simultaneously attempts to learn the type of argument component, as well as continuing to predict links between argument components. The proposed model achieves state-of-the-art results on two separate evaluation corpora. Furthermore, our results show that optimizing for both tasks, as well as adding a fully-connected layer prior to recurrent neural network input, is crucial for high performance.
["Natural language processing"]
https://openreview.net/forum?id=B1KBHtcel
https://openreview.net/pdf?id=B1KBHtcel
https://openreview.net/forum?id=B1KBHtcel&noteId=H1cHmCBNg
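For context on the pointer-network mechanism at the heart of the paper reviewed above: one decoding step scores every encoder state against the current decoder state and softmaxes the scores into a distribution over input positions, i.e., over candidate argument components to link to. The sketch below is the generic Vinyals-style pointer attention in numpy, not the authors' joint model; all names and dimensions are illustrative.

```python
import numpy as np

def pointer_step(enc, dec, W1, W2, v):
    """One pointer-network decoding step.

    enc: (T, d) encoder hidden states, one per argument component.
    dec: (d,)   current decoder hidden state.
    Returns a distribution over the T input positions (link targets).
    """
    scores = np.tanh(enc @ W1.T + dec @ W2.T) @ v  # (T,) attention logits
    e = np.exp(scores - scores.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T, d = 5, 16
enc = rng.standard_normal((T, d))
dec = rng.standard_normal(d)
W1, W2, v = rng.standard_normal((d, d)), rng.standard_normal((d, d)), rng.standard_normal(d)
print(pointer_step(enc, dec, W1, W2, v))  # softmax over the 5 components
```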
HkJF5ei7l
B1KBHtcel
ICLR.cc/2017/conference/-/paper484/official/review
{"title": "Solid work, fit unclear", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes a model for the task of argumentation mining (labeling the set of relationships between statements expressed as sentence-sized spans in a short text). The model combines a pointer network component that identifies links between statements and a classifier that predicts the roles of these statements. The resulting model works well: It outperforms strong baselines, even on datasets with fewer than 100 training examples.\n\nI don't see any major technical issues with this paper, and the results are strong. I am concerned, though, that the paper doesn't make a substantial novel contribution to representation learning. It focuses on ways to adapt reasonably mature techniques to a novel NLP problem. I think that one of the ACL conferences would be a better fit for this work.\n\nThe choice of a pointer network for this problem seems reasonable, though (as noted by other commenters) the paper does not make any substantial comparison with other possible ways of producing trees. The paper does a solid job at breaking down the results quantitatively, but I would appreciate some examples of model output and some qualitative error analysis.\n\nDetail notes: \n\n- Figure 2 appears to have an error. You report that the decoder produces a distribution over input indices only, but you show an example of the network pointing to an output index in one case.\n- I don't think \"Wei12\" is a name.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Here's My Point: Argumentation Mining with Pointer Networks
["Peter Potash", "Alexey Romanov", "Anna Rumshisky"]
One of the major goals in automated argumentation mining is to uncover the argument structure present in argumentative text. In order to determine this structure, one must understand how different individual components of the overall argument are linked. General consensus in this field dictates that the argument components form a hierarchy of persuasion, which manifests itself in a tree structure. This work provides the first neural network-based approach to argumentation mining, focusing on extracting links between argument components, with a secondary focus on classifying types of argument components. In order to solve this problem, we propose to use a modification of a Pointer Network architecture. A Pointer Network is appealing for this task for the following reasons: 1) It takes into account the sequential nature of argument components; 2) By construction, it enforces certain properties of the tree structure present in argument relations; 3) The hidden representations can be applied to auxiliary tasks. In order to extend the contribution of the original Pointer Network model, we construct a joint model that simultaneously attempts to learn the type of argument component, as well as continuing to predict links between argument components. The proposed model achieves state-of-the-art results on two separate evaluation corpora. Furthermore, our results show that optimizing for both tasks, as well as adding a fully-connected layer prior to recurrent neural network input, is crucial for high performance.
["Natural language processing"]
https://openreview.net/forum?id=B1KBHtcel
https://openreview.net/pdf?id=B1KBHtcel
https://openreview.net/forum?id=B1KBHtcel&noteId=HkJF5ei7l
rJA1LgTQg
B1KBHtcel
ICLR.cc/2017/conference/-/paper484/official/review
{"title": "Review", "rating": "5: Marginally below acceptance threshold", "review": "This paper addresses the problem of argument mining, which consists of finding argument types and predicting the relationships between the arguments. The authors proposed a pointer network structure to recover the argument relations. They also propose modifications on pointer network to perform joint training on both type and link prediction tasks. Overall the model is reasonable, but I am not sure if ICLR is the best venue for this work.\n\nMy first concern of the paper is on the novelty of the model. Pointer network has been proposed before. The proposed multi-task learning method is interesting, but the authors only verified it on one task. This makes me feel that maybe the submission is more for a NLP conference rather than ICLR. \n\nThe authors stated that the pointer network is less restrictive compared to some of the existing tree predicting method. However, the datasets seem to only contain single trees or forests, and the stack-based method can be used for forest prediction by adding a virtual root node to each example (as done in the dependency parsing tasks). Therefore, I think the experiments right now cannot reflect the advantages of pointer network models unfortunately. \n\nMy second concern of the paper is on the target task. Given that the authors want to analyze the structures between sentences, is the argumentation mining the best dataset? For example, authors could verify their model by applying it to the other tasks that require tree structures such as dependency parsing. As for NLP applications, I found that the assumption that the boundaries of AC are given is a very strong constraint, and could potentially limit the usefulness of the proposed model. \n\nOverall, in terms of ML, I also feel that baseline methods the authors compared to are probably strong for the argument mining task, but not necessary strong enough for the general tree/forest prediction tasks (as there are other tree/forest prediction methods). In terms of NLP applications, I think the assumption of having AC boundaries is too restrictive, and maybe ICLR is not the best venture for this submission. \n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Here's My Point: Argumentation Mining with Pointer Networks
["Peter Potash", "Alexey Romanov", "Anna Rumshisky"]
One of the major goals in automated argumentation mining is to uncover the argument structure present in argumentative text. In order to determine this structure, one must understand how different individual components of the overall argument are linked. General consensus in this field dictates that the argument components form a hierarchy of persuasion, which manifests itself in a tree structure. This work provides the first neural network-based approach to argumentation mining, focusing on extracting links between argument components, with a secondary focus on classifying types of argument components. In order to solve this problem, we propose to use a modification of a Pointer Network architecture. A Pointer Network is appealing for this task for the following reasons: 1) It takes into account the sequential nature of argument components; 2) By construction, it enforces certain properties of the tree structure present in argument relations; 3) The hidden representations can be applied to auxiliary tasks. In order to extend the contribution of the original Pointer Network model, we construct a joint model that simultaneously attempts to learn the type of argument component, as well as continuing to predict links between argument components. The proposed model achieves state-of-the-art results on two separate evaluation corpora. Furthermore, our results show that optimizing for both tasks, as well as adding a fully-connected layer prior to recurrent neural network input, is crucial for high performance.
["Natural language processing"]
https://openreview.net/forum?id=B1KBHtcel
https://openreview.net/pdf?id=B1KBHtcel
https://openreview.net/forum?id=B1KBHtcel&noteId=rJA1LgTQg
H1snDRS4e
ryTYxh5ll
ICLR.cc/2017/conference/-/paper538/official/review
{"title": "Interesting problem and good motivation, unconvincing solution architecture", "rating": "3: Clear rejection", "review": "The problem of utilizing all available information (across modalities) about a product to learn a meaningful \"joint\" embedding is an interesting one, and certainly seems like it a promising direction for improving recommender systems, especially in the \"cold start\" scenario. I'm unaware of approaches combining as many modalities as proposed in this paper, so an effective solution could indeed be significant. However, there are many aspects of the proposed architecture that seem sub-optimal to me:\n\n1. A major benefit of neural-network based systems is that the entire system can be trained end-to-end, jointly. The proposed approach sticks together largely pre-trained modules for different modalities... this can be justifiable when there is very little training data available on which to train jointly. With 10M product pairs, however, this doesn't seem to be the case for the Amazon dataset (although I haven't worked with this dataset myself so perhaps I'm missing something... either way it's not discussed at all in the paper). I consider the lack of a jointly fine-tuned model a major shortcoming of the proposed approach.\n\n2. The discussion of \"pairwise residual units\" is confusing and not well-motivated. The residual formulation (if I understand it correctly) applies a ReLU layer to the concatenation of the modality specific embeddings, giving a new similarity (after dot products) that can be added to the similarity obtained from the concatenation directly. Why not just have an additional fully-connected layer that mixes the modality specific embeddings to form a final embedding (perhaps of lower dimensionality)? This should at least be presented as a baseline, if the pairwise residual unit is claimed as a contribution... I don't find the provided explanation convincing (in what way does the residual approach reduce parameter count?).\n\n3. More minor: The choice of TextCNN for the text embedding vectors seems fine (although I wonder how an LSTM-based approach would perform)... However the details surrounding how it is used are obscured in the paper. In response to a question, the authors mention that it runs on the concatenation of the first 10 words of the title and product description. Especially for the description, this seems insufficiently long to contain a lot of information to me.\n\nMore care could be given to motivating the choices made in the paper. Finally, I'm not familiar with state of the art on this dataset... do the comparisons accurately reflect it? It seems only one competing technique is presented, with none on the more challenging cold-start scenarios.\n\nMinor detail: In the second paragraph of page 3, there is a reference that just says (cite Julian).", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
CONTENT2VEC: SPECIALIZING JOINT REPRESENTATIONS OF PRODUCT IMAGES AND TEXT FOR THE TASK OF PRODUCT RECOMMENDATION
["Thomas Nedelec", "Elena Smirnova", "Flavian Vasile"]
We propose a unified product embedded representation that is optimized for the task of retrieval-based product recommendation. We generate this representation using Content2Vec, a new deep architecture that merges product content information such as text and image and we analyze its performance on hard recommendation setups such as cold-start and cross-category recommendations. In the case of a normal recommendation regime where collaborative information signal is available, we merge the product co-occurrence information and propose a second architecture Content2vec+ and show its lift in performance versus non-hybrid approaches.
["Applications"]
https://openreview.net/forum?id=ryTYxh5ll
https://openreview.net/pdf?id=ryTYxh5ll
https://openreview.net/forum?id=ryTYxh5ll&noteId=H1snDRS4e
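To make the "pairwise residual unit" the reviewer describes concrete: on the reviewer's reading, the similarity of two products is the dot product of their concatenated per-modality embeddings, plus the dot product of the same vectors after a simple nonlinearity. A minimal numpy sketch of that reading, not the paper's code, follows; the ReLU stands in for whatever nonlinearity the paper actually uses.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pairwise_residual_similarity(mods_a, mods_b):
    """Similarity of two products from lists of per-modality embeddings.

    mods_a, mods_b: lists of 1-D arrays (e.g. [image_emb, text_emb, cf_emb]).
    """
    a = np.concatenate(mods_a)
    b = np.concatenate(mods_b)
    base = a @ b                  # similarity of the raw concatenations
    residual = relu(a) @ relu(b)  # similarity after the nonlinearity
    return base + residual

rng = np.random.default_rng(0)
a = [rng.standard_normal(8), rng.standard_normal(4)]  # e.g. image + text embeddings
b = [rng.standard_normal(8), rng.standard_normal(4)]
print(pairwise_residual_similarity(a, b))
```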
BkHGIg4Vx
ryTYxh5ll
ICLR.cc/2017/conference/-/paper538/official/review
{"title": "", "rating": "3: Clear rejection", "review": "This paper proposes combining different modalities of product content (e.g. review text, images, co-purchase info ...etc) in order to learn one unified product representation for recommender systems. While the idea of combining multiple sources of information is indeed an effective approach for handling data sparsity in recommender systems, I have some reservations on the approach proposed in this paper:\n\n1) Some modalities are not necessarily relevant for the recommendation task or item similarity. For example, cover images of books or movies (which are product types in the experiments of this paper) do not tell us much about their content. The paper should clearly motivate and show how different modalities contribute to the final task.\n\n2) The connection between the proposed joint product embedding and residual networks is a bit awkward. The original residual layers are composed of adding the original input vector to the output of an MLP, i.e. several affine transformations followed by non-linearities. These layers allow training very deep neural networks (up to 1000 layers) as a result of easier gradient flow. In contrast, the pairwise residual unit of this paper adds the dot product of two item vectors to the dot product of the same vectors but after applying a simple non-linearity. The motivation of this architecture is not very obvious, and is not well motivated in the paper.\n\n3) While it is a minor point, but the choice of the term embedding for the dot product of two items is not usual. Embeddings usually refer to vectors in R^n, and for specific entities. Here it refers to the final output, and renders the output layer in Figure 2 pointless.\n\nFinally, I believe the paper can be improved by focusing more on motivating architectural choices, and being more concise in your description. The paper is currently very long (11 pages) and I strongly encourage you to shorten it.\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
CONTENT2VEC: SPECIALIZING JOINT REPRESENTATIONS OF PRODUCT IMAGES AND TEXT FOR THE TASK OF PRODUCT RECOMMENDATION
["Thomas Nedelec", "Elena Smirnova", "Flavian Vasile"]
We propose a unified product embedded representation that is optimized for the task of retrieval-based product recommendation. We generate this representation using Content2Vec, a new deep architecture that merges product content information such as text and image and we analyze its performance on hard recommendation setups such as cold-start and cross-category recommendations. In the case of a normal recommendation regime where collaborative information signal is available, we merge the product co-occurrence information and propose a second architecture Content2vec+ and show its lift in performance versus non-hybrid approaches.
["Applications"]
https://openreview.net/forum?id=ryTYxh5ll
https://openreview.net/pdf?id=ryTYxh5ll
https://openreview.net/forum?id=ryTYxh5ll&noteId=BkHGIg4Vx
rkEPBMlEe
ryTYxh5ll
ICLR.cc/2017/conference/-/paper538/official/review
{"title": "", "rating": "5: Marginally below acceptance threshold", "review": "The paper proposes a method to combine arbitrary content into recommender systems, such as images, text, etc. These various features have been previously used to improve recommender systems, though what's novel here is the contribution of a general-purpose framework to combine arbitrary feature types.\n\nPositively, the idea of combining many heterogeneous feature types into RS is ambitious and fairly novel. Previous works have certainly sought to include various feature types to improve RSs, though combining different features types successfully is difficult.\n\nNegatively, there are a few aspects of the paper that are a bit ad-hoc. In particular:\n-- There are a lot of pieces here being \"glued together\" to build the system. Different parts are trained separately and then combined together using another learning stage. There's nothing wrong with doing things in this way (and indeed it's the most straightforward and likely to work approach), but it pushes the contribution more toward the \"system building\" direction as opposed to the \"end-to-end learning\" direction which is more the focus of this conference.\n-- Further to the above, this makes it hard to say how easily the model would generalize to arbitrary feature types, say e.g. if I had audio or video features describing the item. To incorporate such features into the system would require a lot of implementation work, as opposed to being a system where I can just throw more features in and expect it to work.\n\nThe pre-review comments address some of these issues. Some of the responses aren't entirely convincing, e.g. it'd be better to have the same baselines across tables, rather than dropping some because \"the case had already been made elsewhere\".\n\nOther than that, I like the effort to combine several different feature types in real recommender systems datasets. I'm not entirely sure how strong the baselines are, they seem more like ablation-style experiments rather than comparison against any state-of-the-art RS.\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
CONTENT2VEC: SPECIALIZING JOINT REPRESENTATIONS OF PRODUCT IMAGES AND TEXT FOR THE TASK OF PRODUCT RECOMMENDATION
["Thomas Nedelec", "Elena Smirnova", "Flavian Vasile"]
We propose a unified product embedded representation that is optimized for the task of retrieval-based product recommendation. We generate this representation using Content2Vec, a new deep architecture that merges product content information such as text and image and we analyze its performance on hard recommendation setups such as cold-start and cross-category recommendations. In the case of a normal recommendation regime where collaborative information signal is available, we merge the product co-occurrence information and propose a second architecture Content2vec+ and show its lift in performance versus non-hybrid approaches.
["Applications"]
https://openreview.net/forum?id=ryTYxh5ll
https://openreview.net/pdf?id=ryTYxh5ll
https://openreview.net/forum?id=ryTYxh5ll&noteId=rkEPBMlEe
r1O7mnrVg
SyZprb5xg
ICLR.cc/2017/conference/-/paper169/official/review
{"title": "A work that finds connections between existing theoretical results and the universal approximation theorem", "rating": "6: Marginally above acceptance threshold", "review": "This work finds a connection between Bourgain's junta problem, the existing results in circuit complexity, and the approximation of a boolean function using two-layer neural net. I think that finding connections between different fields and applying the insights gained is a valid contribution. For this reason, I recommend acceptance.\n\nBut my current major concern is that this work is only constrained to the domain of boolean hypercube, which is far from what is done in practice (continuous domain). Indeed, the authors could argue that understanding the former is a first step, but if the connection is only suitable for this case and not adaptable to more general scenarios, then it probably would have limited interest.", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}
review
2017
ICLR.cc/2017/conference
On Robust Concepts and Small Neural Nets
["Amit Deshpande", "Sushrut Karmalkar"]
The universal approximation theorem for neural networks says that any reasonable function is well-approximated by a two-layer neural network with sigmoid gates but it does not provide good bounds on the number of hidden-layer nodes or the weights. However, robust concepts often have small neural networks in practice. We show an efficient analog of the universal approximation theorem on the boolean hypercube in this context. We prove that any noise-stable boolean function on n boolean-valued input variables can be well-approximated by a two-layer linear threshold circuit with a small number of hidden-layer nodes and small weights, that depend only on the noise-stability and approximation parameters, and are independent of n. We also give a polynomial time learning algorithm that outputs a small two-layer linear threshold circuit that approximates such a given function. We also show weaker generalizations of this to noise-stable polynomial threshold functions and noise-stable boolean functions in general.
["Theory"]
https://openreview.net/forum?id=SyZprb5xg
https://openreview.net/pdf?id=SyZprb5xg
https://openreview.net/forum?id=SyZprb5xg&noteId=r1O7mnrVg
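The quantity driving the bounds debated in these reviews is the noise sensitivity of a boolean function f: the probability that f(x) differs from f(y) when y is x with each bit flipped independently with probability eps. It is straightforward to estimate by Monte Carlo; the sketch below (illustrative, not from the paper) contrasts majority, a classic noise-stable function, with parity, a noise-sensitive one.

```python
import numpy as np

def noise_sensitivity(f, n, eps, trials=20000, rng=np.random.default_rng(0)):
    """Estimate NS_eps(f) = Pr[f(x) != f(y)], where y flips each bit of x w.p. eps."""
    x = rng.integers(0, 2, size=(trials, n))
    flips = rng.random((trials, n)) < eps
    y = np.where(flips, 1 - x, x)
    fx = np.apply_along_axis(f, 1, x)
    fy = np.apply_along_axis(f, 1, y)
    return float(np.mean(fx != fy))

majority = lambda bits: int(bits.sum() * 2 > len(bits))
parity = lambda bits: int(bits.sum() % 2)
print(noise_sensitivity(majority, n=101, eps=0.01))  # small: majority is noise-stable
print(noise_sensitivity(parity, n=101, eps=0.01))    # near 1/2: parity is noise-sensitive
```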
B18esDeVe
SyZprb5xg
ICLR.cc/2017/conference/-/paper169/official/review
{"title": "This paper provides an analog of the universal approximation theorem where the size of the network depends on a notion of noise-stability instead of the dimension.", "rating": "5: Marginally below acceptance threshold", "review": "The approximation capabilities of neural networks have been studied before for approximating different classes of functions. The goal of this paper is to provide an analog of the approximation theorem for the class of noise-stable functions. The class of functions that are noise-stable and their output does not significantly depend on an individual input seems an interesting class and therefore I find the problem definition interesting. The paper is well-written and it is easy to follow the proofs and arguments. \n\nI have two major comments:\n\n1- Presentation: The way I understand this arguments is that the noise-stability measures the \"true\" dimensionality of the data based on the dependence of the function on different dimensions. Therefore, it is possible to restate and prove an analog to the approximation theorems based on \"true\" dimensionality of data. It is also unclear when the stability based bounds are tighter than dimension based bounds as both of them grow exponentially. I find these discussions interesting but unfortunately, the authors present the result as some bound that does not depend on the dimension and a constant (!??) that grows exponentially with (1/eps). This is not entirely the right picture because the epsilon in the stability could itself depend on the dimension. I believe in most problems (1/epsilon) grows with the dimension. \n\n2- Contribution: Even though the connection is new and interesting, the contribution of the paper is not significant enough. The presented results are direct applications of previous works and most of the lemmas in the paper are restating the known results. I believe more discussions and results need to be added to make this a complete work.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
On Robust Concepts and Small Neural Nets
["Amit Deshpande", "Sushrut Karmalkar"]
The universal approximation theorem for neural networks says that any reasonable function is well-approximated by a two-layer neural network with sigmoid gates but it does not provide good bounds on the number of hidden-layer nodes or the weights. However, robust concepts often have small neural networks in practice. We show an efficient analog of the universal approximation theorem on the boolean hypercube in this context. We prove that any noise-stable boolean function on n boolean-valued input variables can be well-approximated by a two-layer linear threshold circuit with a small number of hidden-layer nodes and small weights, that depend only on the noise-stability and approximation parameters, and are independent of n. We also give a polynomial time learning algorithm that outputs a small two-layer linear threshold circuit that approximates such a given function. We also show weaker generalizations of this to noise-stable polynomial threshold functions and noise-stable boolean functions in general.
["Theory"]
https://openreview.net/forum?id=SyZprb5xg
https://openreview.net/pdf?id=SyZprb5xg
https://openreview.net/forum?id=SyZprb5xg&noteId=B18esDeVe
r1tGF0HEe
SyZprb5xg
ICLR.cc/2017/conference/-/paper169/official/review
{"title": "review of ``ON ROBUST CONCEPTS AND SMALL NEURAL NETS''", "rating": "5: Marginally below acceptance threshold", "review": "SUMMARY \nThis paper presents a study of the number of hidden units and training examples needed to learn functions from a particular class. \nThis class is defined as those Boolean functions with an upper bound on the variability of the outputs. \n\nPROS\nThe paper promotes interesting results from the theoretical computer science community to investigate the efficiency of representation of functions with limited variability in terms of shallow feedforward networks with linear threshold units. \n\nCONS \nThe analysis is limited to shallow networks. The analysis is based on piecing together interesting results, however without contributing significant innovations. \nThe presentation of the main results and conclusions is somewhat obscure, as the therein appearing terms/constants do not express a clear relation between increased robustness and decreasing number of required hidden units. \n\nCOMMENTS \n\n- In the abstract one reads ``The universal approximation theorem for neural networks says that any reasonable function is well-approximated by a two-layer neural network with sigmoid gates but it does not provide good bounds on the number of hidden-layer nodes or the weights.'' \n\nIn page 1 the paper points the reader to a review article. It could be a good idea to include also more recent references. \n\nGiven the motivation presented in the abstract of the paper it would be a good idea to also comment of works discussing the classes of Boolean functions representable by linear threshold networks. \nFor instance the paper [Hyperplane Arrangements Separating Arbitrary Vertex Classes in n-Cubes. Wenzel, Ay, Paseman] discusses various classes of functions that can be represented by shallow linear threshold networks and provides upper and lower bounds on the number of hidden units needed for representing various types of Boolean functions. In particular that paper also provides lower bounds on the number of hidden units needed to define a universal approximator. \n\n- It certainly would be a good idea to discuss the results on the learning complexity in terms of measures such as the VC-dimension. \n\n- Thank you for the explanations regarding the constants. \nSo if the noise sensitivity is kept constant, larger values of epsilon are associated with a smaller value of delta and of 1/epsilon. \nNonetheless, the description in Theorem 2 is in terms of poly(1/epsilon, 1/delta), which still could increase? \nAlso, in Lemma 1 reducing the sensitivity at a constant noise increases the bound on k? \n\n- The fact that the descriptions are independent of n seems to be related to the definition of the noise sensitivity as an expectation over all inputs. This certainly deserves more discussion. One good start could be to discuss examples of functions with an upper bound on the noise sensitivity (aside from the linear threshold functions discussed in Lemma 2). \nAlso, reverse statements to Lemma 1 would be interesting, describing the noise sensitivity of juntas specifically, even if only as simple examples. \n\n- On page 3 ``...variables is polynomial in the noise-sensitivity parameters'' should be inverse of?\n\nMINOR COMMENTS\n\nOn page 5 Proposition 1 should be Lemma 1? \n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
On Robust Concepts and Small Neural Nets
["Amit Deshpande", "Sushrut Karmalkar"]
The universal approximation theorem for neural networks says that any reasonable function is well-approximated by a two-layer neural network with sigmoid gates but it does not provide good bounds on the number of hidden-layer nodes or the weights. However, robust concepts often have small neural networks in practice. We show an efficient analog of the universal approximation theorem on the boolean hypercube in this context. We prove that any noise-stable boolean function on n boolean-valued input variables can be well-approximated by a two-layer linear threshold circuit with a small number of hidden-layer nodes and small weights, that depend only on the noise-stability and approximation parameters, and are independent of n. We also give a polynomial time learning algorithm that outputs a small two-layer linear threshold circuit that approximates such a given function. We also show weaker generalizations of this to noise-stable polynomial threshold functions and noise-stable boolean functions in general.
["Theory"]
https://openreview.net/forum?id=SyZprb5xg
https://openreview.net/pdf?id=SyZprb5xg
https://openreview.net/forum?id=SyZprb5xg&noteId=r1tGF0HEe
Skn0jlpXg
ryWKREqxx
ICLR.cc/2017/conference/-/paper227/official/review
{"title": "Review", "rating": "5: Marginally below acceptance threshold", "review": "The paper proposed to analyze several recently developed machine readers and found that some machine readers could potentially take advantages of the entity marker (given that the same marker points out to the same entity). I usually like analysis papers, but I found the argument proposed in this paper not very clear.\n\nI like the experiments on the Stanford reader, which shows that the entity marker in fact helps the Stanford reader on WDW. I found that results rather interesting.\n\nHowever, I found the organization and the overall message of this paper quite confusing. First of all, it feels that the authors want to explain the above behavior with some definition of the \u201cstructures\u201d. However, I am not sure that how successful the attempt is. For me, it is still not clear what the structures are. This makes reading section 4 a bit frustrating. \n\nI am also not sure what is the take home message of this paper. Does it mean that the entity marking should be used in the MR models? Should we design models that can also model the entity reference at the same time? What are the roles of the linguistic features here? Should we use linguistic structure to overcome the reference issue?\n\nOverall, I feel that the analysis is interesting, but I feel that the paper can benefit from having a more focused argument.\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Emergent Predication Structure in Vector Representations of Neural Readers
["Hai Wang", "Takeshi Onishi", "Kevin Gimpel", "David McAllester"]
Reading comprehension is a question answering task where the answer is to be found in a given passage about entities and events not mentioned in general knowledge sources. A significant number of neural architectures for this task (neural readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of “predication structure” in the hidden state vectors of a class of neural readers including the Attentive Reader and Stanford Reader. We posit that the hidden state vectors can be viewed as (a representation of) a concatenation [P, c] of a “predicate vector” P and a “constant symbol vector” c and that the hidden state represents the atomic formula P(c). This predication structure plays a conceptual role in relating “aggregation readers” such as the Attentive Reader and the Stanford Reader to “explicit reference readers” such as the Attention-Sum Reader, the Gated-Attention Reader and the Attention-over-Attention Reader. In an independent contribution, we show that the addition of linguistic features to the input of existing neural readers significantly boosts performance, yielding the best results to date on the Who-did-What dataset.
["Natural language processing", "Deep learning", "Applications"]
https://openreview.net/forum?id=ryWKREqxx
https://openreview.net/pdf?id=ryWKREqxx
https://openreview.net/forum?id=ryWKREqxx&noteId=Skn0jlpXg
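For readers outside the cloze-style reading-comprehension literature, the "aggregation reader" readout at the center of the predication-structure argument can be sketched in a few lines: attention over passage hidden states yields a weighted sum o, and the answer is the candidate whose output embedding has the largest inner product with o; the paper posits that o behaves like a concatenation [P, c]. The code below is a generic Stanford-Reader-style readout in numpy with illustrative dimensions, not the authors' code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def aggregation_readout(H, q, W, E):
    """Stanford-Reader-style answer selection.

    H: (T, d) passage hidden states; q: (d,) question vector;
    W: (d, d) bilinear attention matrix; E: (A, d) candidate answer embeddings.
    """
    alpha = softmax(H @ (W @ q))  # attention over passage positions
    o = alpha @ H                 # aggregated state, argued to act like [P, c]
    return int(np.argmax(E @ o))  # candidate with the largest inner product

rng = np.random.default_rng(0)
T, d, A = 30, 16, 4
H, q, W, E = (rng.standard_normal(s) for s in [(T, d), (d,), (d, d), (A, d)])
print(aggregation_readout(H, q, W, E))
```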
ByaPPS7Vl
ryWKREqxx
ICLR.cc/2017/conference/-/paper227/official/review
{"title": "", "rating": "6: Marginally above acceptance threshold", "review": "This paper aims to provide an insightful and analytic survey over the recent literature on reading comprehension with the distinct goal of investigating whether logical structure (or predication, as the authors rephrased in their response) arises in many of the recent models. I really like the spirit of the paper and appreciate the efforts to organize rather chaotic recent literature into two unified themes: \"aggregation readers\" and \"explicit reference models\u201d. Overall the quality of writing is great and section 3 was especially nice to read. I\u2019m also happy with the proposed rewording from \"logical structure\" to \u201cpredication\", and the clarification by the authors was detailed and helpful.\n\nI think I still have slight mixed feelings about the contribution of the work. First, I wonder whether the choice of the dataset was ideal in the first place to accomplish the desired goal of the paper. There have been concerns about CNN/DailyMail dataset (Chen et al. ACL\u201916) and it is not clear to me whether the dataset supports investigation on logical structure of interesting kinds. Maybe it is bound to be rather about lack of logical structure.\n\nSecond, I wish the discussion on predication sheds more practical insights into dataset design or model design to better tackle reading comprehension challenges. In that sense, it may have been more helpful if the authors could make more precise analysis on different types of reading comprehension challenges, what types of logical structure are lacking in various existing models and datasets, and point to specific directions where the community needs to focus more.\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Emergent Predication Structure in Vector Representations of Neural Readers
["Hai Wang", "Takeshi Onishi", "Kevin Gimpel", "David McAllester"]
Reading comprehension is a question answering task where the answer is to be found in a given passage about entities and events not mentioned in general knowledge sources. A significant number of neural architectures for this task (neural readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of “predication structure” in the hidden state vectors of a class of neural readers including the Attentive Reader and Stanford Reader. We posits that the hidden state vectors can be viewed as (a representation of) a concatenation [P, c] of a “predicate vector” P and a “constant symbol vector” c and that the hidden state represents the atomic formula P(c). This predication structure plays a conceptual role in relating “aggregation readers” such as the Attentive Reader and the Stanford Reader to “explicit reference readers” such as the Attention-Sum Reader, the Gated-Attention Reader and the Attention-over-Attention Reader. In an independent contribution, we show that the addition of linguistics features to the input to existing neural readers significantly boosts performance yielding the best results to date on the Who-did-What dataset.
["Natural language processing", "Deep learning", "Applications"]
https://openreview.net/forum?id=ryWKREqxx
https://openreview.net/pdf?id=ryWKREqxx
https://openreview.net/forum?id=ryWKREqxx&noteId=ByaPPS7Vl
HkB55bwVx
ryWKREqxx
ICLR.cc/2017/conference/-/paper227/official/review
{"title": "Simple predicate structure and data set", "rating": "6: Marginally above acceptance threshold", "review": "The paper aims to consolidate some recent literature in simple types of \"reading comprehension\" tasks involving matching questions to answers to be found in a passage, and then to explore the types of structure learned by these models and propose modifications. These reading comprehension datasets such as CNN/Daily Mail are on the simpler side because they do not generally involve chains of reasoning over multiple pieces of supporting evidence as can be found in datasets like MCTest. Many models have been proposed for this task, and the paper breaks down these models into \"aggregation readers\" and \"explicit reference readers.\" The authors show that the aggregation readers organize their hidden states into a predicate structure which allows them to mimic the explicit reference readers. The authors then experiment with adding linguistic features, including reference features, to the existing models to improve performance.\n\nI appreciate the re-naming and re-writing of the paper to make it more clear that the aggregation readers are specifically learning a predicate structure, as well as the inclusion of results about dimensionality of the symbol space. Further, I think the effort to organize and categorize several different reading comprehension models into broader classes is useful, as the field has been producing many such models and the landscape is unclear. \n\nThe concerns with this paper are that the predicate structure demonstrated is fairly simple, and it is not clear that it provides insight towards the development of better models in the future, since the \"explicit reference readers\" need not learn it, and the CNN/Daily Mail dataset has very little headroom left as demonstrated by Chen et al. 2016. The desire for \"dramatic improvements in performance\" mentioned in the discussion section probably cannot be achieved on these datasets. More complex datasets would probably involve multi-hop inference which this paper does not discuss. Further, the message of the paper is a bit scattered and hard to parse, and could benefit from a bit more focus.\n\nI think that with the explosion of various competing neural network models for NLP tasks, contributions like this one which attempt to organize and analyze the landscape are valuable, but that this paper might be better suited for an NLP conference or journal such as TACL.\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Emergent Predication Structure in Vector Representations of Neural Readers
["Hai Wang", "Takeshi Onishi", "Kevin Gimpel", "David McAllester"]
Reading comprehension is a question answering task where the answer is to be found in a given passage about entities and events not mentioned in general knowledge sources. A significant number of neural architectures for this task (neural readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of “predication structure” in the hidden state vectors of a class of neural readers including the Attentive Reader and Stanford Reader. We posits that the hidden state vectors can be viewed as (a representation of) a concatenation [P, c] of a “predicate vector” P and a “constant symbol vector” c and that the hidden state represents the atomic formula P(c). This predication structure plays a conceptual role in relating “aggregation readers” such as the Attentive Reader and the Stanford Reader to “explicit reference readers” such as the Attention-Sum Reader, the Gated-Attention Reader and the Attention-over-Attention Reader. In an independent contribution, we show that the addition of linguistics features to the input to existing neural readers significantly boosts performance yielding the best results to date on the Who-did-What dataset.
["Natural language processing", "Deep learning", "Applications"]
https://openreview.net/forum?id=ryWKREqxx
https://openreview.net/pdf?id=ryWKREqxx
https://openreview.net/forum?id=ryWKREqxx&noteId=HkB55bwVx
H1WuMnSVx
r1LXit5ee
ICLR.cc/2017/conference/-/paper519/official/review
{"title": "", "rating": "7: Good paper, accept", "review": "This is a very interesting and timely paper, with multiple contributions. \n- it proposes a setup for dealing with combinatorial perception and action-spaces that generalizes to an arbitrary number of units and opponent units,\n- it establishes some deep RL baseline results on a collection of Starcraft subdomains,\n- it proposes a new algorithm that is a hybrid between black-box optimization REINFORCE, and which facilitates consistent exploration.\n\n\nAs mentioned in an earlier comment, I don\u2019t see why the \u201cgradient of the average cumulative reward\u201d is a reasonable choice, as compared to just the average reward? This over-weights late rewards at the expense of early ones, so the updates are not matching the measured objective. The authors state that they \u201cdid not observe a large difference in preliminary experiments\u201d -- so if that is the case, then why not choose the correct objective?\n\nDPQ is characterized incorrectly: despite its name, it does not \u201ccollect traces by following deterministic policies\u201d, instead it follows a stochastic behavior policy and learns off-policy about the deterministic policy. Please revise this. \n\nGradient-free optimization is also characterized incorrectly (\u201cit only scales to few parameters\u201d), recent work has shown that this can be overcome (e.g. the TORCS paper by Koutnik et al, 2013). This also suggests that your \u201cpreliminary experiments with direct exploration in the parameter space\u201d may not have followed best practices in neuroevolution? Did you try out some of the recent variants of NEAT for example, which have been applied to similar domains in the past?\n\nOn the specific results, I\u2019m wondering about the DQN transfer from m15v16 to m5v5, obtaining the best win rate of 96% in transfer, despite only reaching 13% (the worst) on the training domain? Is this a typo, or how can you explain that?", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Episodic Exploration for Deep Deterministic Policies for StarCraft Micromanagement
["Nicolas Usunier", "Gabriel Synnaeve", "Zeming Lin", "Soumith Chintala"]
We consider scenarios from the real-time strategy game StarCraft as benchmarks for reinforcement learning algorithms. We focus on micromanagement, that is, the short-term, low-level control of team members during a battle. We propose several scenarios that are challenging for reinforcement learning algorithms because the state-action space is very large, and there is no obvious feature representation for the value functions. We describe our approach to tackle the micromanagement scenarios with deep neural network controllers from raw state features given by the game engine. We also present a heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation. This algorithm collects traces for learning using deterministic policies, which appears much more efficient than, e.g., ε-greedy exploration. Experiments show that this algorithm allows to successfully learn non-trivial strategies for scenarios with armies of up to 15 agents, where both Q-learning and REINFORCE struggle.
["Deep learning", "Reinforcement Learning", "Games"]
https://openreview.net/forum?id=r1LXit5ee
https://openreview.net/pdf?id=r1LXit5ee
https://openreview.net/forum?id=r1LXit5ee&noteId=H1WuMnSVx
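The zero-order ingredient the reviewers discuss can be isolated from the StarCraft setting: perturb the parameters along a random direction, observe the resulting objective value, and use the objective-weighted direction as an update. The sketch below is a generic two-point zeroth-order gradient estimator on a toy quadratic; it illustrates the principle only, not the paper's episode-level scheme with its sign(w / Psi) heuristic, and all names are illustrative.

```python
import numpy as np

def zo_gradient(f, theta, delta=1e-2, samples=64, rng=np.random.default_rng(0)):
    """Two-point zeroth-order gradient estimate of f at theta."""
    g = np.zeros_like(theta)
    for _ in range(samples):
        u = rng.standard_normal(theta.shape)
        g += (f(theta + delta * u) - f(theta - delta * u)) / (2.0 * delta) * u
    return g / samples

f = lambda th: -np.sum(th ** 2)            # stand-in for an episode return
theta = np.ones(10)
for _ in range(200):
    theta += 0.05 * zo_gradient(f, theta)  # ascend the estimated gradient
print(np.abs(theta).max())                 # close to 0, the maximizer of f
```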
SJcQahSEg
r1LXit5ee
ICLR.cc/2017/conference/-/paper519/official/review
{"title": "Final Review: Nice new application of zeroth order optimization for structured exploration. Complex domain, and good results", "rating": "7: Good paper, accept", "review": "The paper presents a learning algorithm for micromanagement of battle scenarios in real-time strategy games. It focuses on a complex sub-problem of the full RTS problem. The assumptions and restrictions made (greedy MDP, distance-based action encoding, etc.) are clear and make sense for this problem.\n\nThe main contribution of this paper is the zero-order optimization algorithm and how it is used for structured exploration. This is a nice new application of zero-order optimization meets deep learning for RL, quite well-motivated using similar arguments as DPG. The results show clear wins over vanilla Q-learning and REINFORCE, which is not hard to believe. Although RTS is a very interesting and challenging domain (certainly worthy as a domain of focused research!), it would have been nice to see results on other domains, mainly because it seems that this algorithm could be more generally applicable than just RTS games. Also, evaluation on such a complex domain makes it difficult to predict what other kinds of domains would benefit from this zero-order approach. Maybe the authors could add some text to clarify/motivate this.\n\nThere are a few seemingly arbitrary choices that are justified only by \"it worked in practice\". For example, using only the sign of w / Psi_{theta}(s^k, a^k). Again later: \"Also we neglected the argmax operation that chooses the actions\". I suppose this and dividing by t could keep things nicely within or close to [-1,1] ? It might make sense to try truncating/normalizing w/Psi; it seems that much information must be lost when only taking the sign. Also lines such as \"We did not extensively experiment with the structure of the network, but we found the maxpooling and tanh nonlinearity to be particularly important\" and claiming the importance of adagrad over RMSprop without elaboration or providing any details feels somewhat unsatisfactory and leaves the reader wondering why.. e.g. could these only be true in the RTS setup in this paper?\n\nThe presentation of the paper can be improved, as some ideas are presented without any context making it unnecessarily confusing. For example, when defining f(\\tilde{s}, c) at the top of page 5, the w vector is not explained at all, so the reader is left wondering where it comes from or what its use is. This is explained later, of course, but one sentence on its role here would help contextualize its purpose (maybe refer later to the section where it is described fully). Also page 7: \"because we neglected that a single u is sampled for an entire episode\"; actually, no, you did mention this in the text above and it's clear from the pseudo-code too.\n\n\"perturbated\" -> \"perturbed\"\n\n--- After response period: \n\nNo rebuttal entered, therefore review remains unchanged.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Episodic Exploration for Deep Deterministic Policies for StarCraft Micromanagement
["Nicolas Usunier", "Gabriel Synnaeve", "Zeming Lin", "Soumith Chintala"]
We consider scenarios from the real-time strategy game StarCraft as benchmarks for reinforcement learning algorithms. We focus on micromanagement, that is, the short-term, low-level control of team members during a battle. We propose several scenarios that are challenging for reinforcement learning algorithms because the state-action space is very large, and there is no obvious feature representation for the value functions. We describe our approach to tackle the micromanagement scenarios with deep neural network controllers from raw state features given by the game engine. We also present a heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation. This algorithm collects traces for learning using deterministic policies, which appears much more efficient than, e.g., ε-greedy exploration. Experiments show that this algorithm allows to successfully learn non-trivial strategies for scenarios with armies of up to 15 agents, where both Q-learning and REINFORCE struggle.
["Deep learning", "Reinforcement Learning", "Games"]
https://openreview.net/forum?id=r1LXit5ee
https://openreview.net/pdf?id=r1LXit5ee
https://openreview.net/forum?id=r1LXit5ee&noteId=SJcQahSEg
rJULd_G4x
r1LXit5ee
ICLR.cc/2017/conference/-/paper519/official/review
{"title": "Topically relevant work, likely of significant interest", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This work introduces some StarCraft micro-management tasks (controlling individual units during a battle). These tasks are difficult for recent DeepRL methods due to high-dimensional, variable action spaces (the action space is the task of each unit, the number of units may vary). In such large action spaces, simple exploration strategies (such as epsilon-greedy) perform poorly.\n\nThey introduce a novel algorithm ZO to tackle this problem. This algorithm combines ideas from policy gradient, deep networks trained with backpropagation for state embedding and gradient free optimization. The algorithm is well explained and is compared to some existing baselines. Due to the gradient free optimization providing for much better structured exploration, it performs far better.\n\nThis is a well-written paper and a novel algorithm which is applied to a very relevant problem. After the success of DeepRL approaches at learning in large state spaces such as visual environment, there is significant interest in applying RL to more structured state and action spaces. The tasks introduced here are interesting environments for these sorts of problems.\n\nIt would be helpful if the authors were able to share the source code / specifications for their tasks, to allow other groups to compare against this work.\n\nI found section 5 (the details of the raw inputs and feature encodings) somewhat difficult to understand. In addition to clarifying, the authors might wish to consider whether they could provide the source code to their algorithm or at least the encoder to allow careful comparisons by other work.\n\nAlthough discussed, there is no baseline comparison with valued based approaches with attempt to do better exploration by modeling uncertainty (such as Bootstrapped DQN). It would useful to understand how such approaches, which also promise better exploration, compare.\n\nIt would also be interesting to discuss whether action embedding models such as energy-based approaches (e.g. https://arxiv.org/pdf/1512.07679v2.pdf, http://www.jmlr.org/papers/volume5/sallans04a/sallans04a.pdf ) or continuous action embeddings (https://arxiv.org/pdf/1512.07679v2.pdf ) would provide an alternative approach for structured exploration in these action spaces.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Episodic Exploration for Deep Deterministic Policies for StarCraft Micromanagement
["Nicolas Usunier", "Gabriel Synnaeve", "Zeming Lin", "Soumith Chintala"]
We consider scenarios from the real-time strategy game StarCraft as benchmarks for reinforcement learning algorithms. We focus on micromanagement, that is, the short-term, low-level control of team members during a battle. We propose several scenarios that are challenging for reinforcement learning algorithms because the state-action space is very large, and there is no obvious feature representation for the value functions. We describe our approach to tackling the micromanagement scenarios with deep neural network controllers from raw state features given by the game engine. We also present a heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation. This algorithm collects traces for learning using deterministic policies, which appears much more efficient than, e.g., ε-greedy exploration. Experiments show that this algorithm makes it possible to successfully learn non-trivial strategies for scenarios with armies of up to 15 agents, where both Q-learning and REINFORCE struggle.
["Deep learning", "Reinforcement Learning", "Games"]
https://openreview.net/forum?id=r1LXit5ee
https://openreview.net/pdf?id=r1LXit5ee
https://openreview.net/forum?id=r1LXit5ee&noteId=rJULd_G4x
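The review above credits ZO's gains to gradient-free (zeroth-order) optimization providing structured exploration. As a minimal illustration of that general idea, and not the paper's actual ZO algorithm, the Python sketch below estimates a policy gradient from rollout returns alone, using a two-point zeroth-order estimator; `rollout_reward` is a hypothetical function, assumed for this note, that runs one deterministic episode with the given parameters and returns its cumulative reward.

```python
import numpy as np

def zo_gradient_estimate(theta, rollout_reward, delta=0.1, n_samples=8):
    """Two-point zeroth-order gradient estimate of E[reward] w.r.t. theta.

    No backpropagation through the environment is needed: each sample
    perturbs the policy parameters along a random unit direction u and
    compares the returns of two deterministic rollouts. The estimate is
    correct for a smoothed objective up to a dimension-dependent scale,
    which in practice is absorbed into the learning rate.
    """
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        u = np.random.randn(*theta.shape)
        u /= np.linalg.norm(u)  # random direction on the unit sphere
        r_plus = rollout_reward(theta + delta * u)
        r_minus = rollout_reward(theta - delta * u)
        grad += (r_plus - r_minus) / (2.0 * delta) * u
    return grad / n_samples
```

Because each perturbation is held fixed for a whole episode, exploration happens at the level of entire deterministic policies rather than per-step random actions, which is the contrast with epsilon-greedy that the review highlights.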
HkkRFeH4x
By14kuqxx
ICLR.cc/2017/conference/-/paper450/official/review
{"title": "Good read, some questions about performance in practice.", "rating": "7: Good paper, accept", "review": "Interesting and timely paper. Lots of new neural network accelerators popping up.\n\nI'm not an expert in this domain and to familiarize myself with the topic, I browsed through related work and skimmed the DaDianNao paper.\nMy main question is about the choice of technology. What struck me is that your paper contains very few implementation details except for the technology (PRA 65nm vs DaDianNao 28nm). \nCombined with the fact that the main improvement of your work appears to be performance rather than energy efficiency, I was wondering about the maximum clock estimated frequency of the PRA implementation due to the added complexity? Based on the explanation in the methodology section, I assume that the performance comparison is based on number of clock cycles. Do you have any numbers/estimates about the performance in practices (taking into account clock frequency)?\n\n\n", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}
review
2017
ICLR.cc/2017/conference
Bit-Pragmatic Deep Neural Network Computing
["Jorge Albericio", "Patrick Judd", "Alberto Delmas", "Sayeh Sharify", "Andreas Moshovos"]
We quantify a source of ineffectual computations when processing the multiplications of the convolutional layers in Deep Neural Networks (DNNs) and propose Pragmatic (PRA), an architecture that exploits it to improve performance and energy efficiency. The source of these ineffectual computations is best understood in the context of conventional multipliers, which internally generate multiple terms, that is, products of the multiplicand and powers of two, which added together produce the final product. At runtime, many of these terms are zero, as they are generated when the multiplicand is combined with the zero bits of the multiplier. While conventional bit-parallel multipliers calculate all terms in parallel to reduce individual product latency, PRA calculates only the non-zero terms, resulting in a design whose execution time for convolutional layers is ideally proportional to the number of activation bits that are 1. Measurements demonstrate that for the convolutional layers of Convolutional Neural Networks during inference, PRA improves performance by 4.3x over the DaDianNao (DaDN) accelerator and by 4.5x when DaDN uses an 8-bit quantized representation. DaDN was reported to be 300x faster than commodity graphics processors.
["Deep learning", "Applications"]
https://openreview.net/forum?id=By14kuqxx
https://openreview.net/pdf?id=By14kuqxx
https://openreview.net/forum?id=By14kuqxx&noteId=HkkRFeH4x
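The abstract's claim, that a conventional multiplier internally generates one term per multiplier bit while only the non-zero bits actually contribute, can be checked with a few lines of Python. This is an illustrative software sketch of the arithmetic identity, not the PRA hardware design; `serial_multiply` is a hypothetical helper written for this note.

```python
def serial_multiply(multiplicand, multiplier):
    """Compute multiplicand * multiplier as a sum of shifted multiplicands,
    emitting one term per non-zero multiplier bit, and count those terms."""
    product, terms, bit_pos = 0, 0, 0
    while multiplier:
        if multiplier & 1:  # only set bits produce a non-zero term
            product += multiplicand << bit_pos
            terms += 1
        multiplier >>= 1
        bit_pos += 1
    return product, terms

# Example: 13 * 0b1000001 (= 13 * 65) needs only 2 terms, whereas a
# bit-parallel 7-bit multiplier would generate all 7 internally.
assert serial_multiply(13, 0b1000001) == (845, 2)
```

A design whose work scales with `terms` rather than with the full bit width is, in spirit, what the abstract means by execution time "ideally proportional to the number of activation bits that are 1."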
ByPiBkOre
By14kuqxx
ICLR.cc/2017/conference/-/paper450/official/review
{"title": "An interesting but very narrow DNN hardware accelerator", "rating": "6: Marginally above acceptance threshold", "review": "The paper presents a hardware accelerator architecture for deep neural network inference, and a simulated performance evaluation thereof. The central idea of the proposed architecture (PRA) revolves around the fact that the regular (parallel) MACC operations waste a considerable amount of area/power to perform multiplications with zero bits. Since in the DNN scenario, one of the multiplicands (the weight) is known in advance, the multiplications by the zero digits can be eliminated without affecting the calculation and lowest non-zero bits can be further dropped at the expense of precision. The paper proposes an architecture exploiting this simple idea implementing bit-serial evaluation of multiplications with throughput depending on the number of non-zero bits in each weight. \n\nWhile the idea is in general healthy, it is limited to fixed point arithmetics. Nowadays, DNNs trained on regular graphics hardware have been shown to work well in floating point down to single (32bit) and even half-precision (16bit) in many cases with little or no additional adjustments. However, this is generally not true for 16bit (not mentioning 8bit) fixed point. Since it is not trivial to quantize a network to 16 or 8 bits using standard learning, recent efforts have shown successful incorporation of quantization into the training process. One of the extreme cases showed quanitzation to 1bit weights with negligible loss in performance (arXiv:1602.02830). 1-bit DNNs involve no multiplication at all; moreover, the proposed multiplier-dependent representation of multiplication discussed in the present paper can be implemented as a 1-bit DNN. I think it would be very helpful if the authors could address the advantages their architecture brings to the evaluation of 1-bit DNNs. \n\nTo summarize, I believe that immediately useful hardware DNN accelerators still need to operate in floating point (a good example are Movidius chips -- nowadays Intel). Fixed point architectures promise additional efficiency and are important in low-power applications, but they depend very much on what has been done at training. In view of this -- and this is my own extreme opinion -- it makes sense to build an architecture for 1-bit DNNs. I have the impression that the proposed architecture could be very suitable for this, but the devil is in the details and currently evidence is missing to make such claims.\n\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Bit-Pragmatic Deep Neural Network Computing
["Jorge Albericio", "Patrick Judd", "Alberto Delmas", "Sayeh Sharify", "Andreas Moshovos"]
We quantify a source of ineffectual computations when processing the multiplications of the convolutional layers in Deep Neural Networks (DNNs) and propose Pragmatic (PRA), an architecture that exploits it to improve performance and energy efficiency. The source of these ineffectual computations is best understood in the context of conventional multipliers, which internally generate multiple terms, that is, products of the multiplicand and powers of two, which added together produce the final product. At runtime, many of these terms are zero, as they are generated when the multiplicand is combined with the zero bits of the multiplier. While conventional bit-parallel multipliers calculate all terms in parallel to reduce individual product latency, PRA calculates only the non-zero terms, resulting in a design whose execution time for convolutional layers is ideally proportional to the number of activation bits that are 1. Measurements demonstrate that for the convolutional layers of Convolutional Neural Networks during inference, PRA improves performance by 4.3x over the DaDianNao (DaDN) accelerator and by 4.5x when DaDN uses an 8-bit quantized representation. DaDN was reported to be 300x faster than commodity graphics processors.
["Deep learning", "Applications"]
https://openreview.net/forum?id=By14kuqxx
https://openreview.net/pdf?id=By14kuqxx
https://openreview.net/forum?id=By14kuqxx&noteId=ByPiBkOre