note_id: stringlengths (9-12)
forum_id: stringlengths (9-13)
invitation: stringlengths (40-95)
content: stringlengths (44-35k)
type: stringclasses (1 value)
year: stringclasses (7 values)
venue: stringclasses (171 values)
paper_title: stringlengths (0-188)
paper_authors: stringlengths (2-1.01k)
paper_abstract: stringlengths (0-5k)
paper_keywords: stringlengths (2-679)
forum_url: stringlengths (41-45)
pdf_url: stringlengths (39-43)
review_url: stringlengths (58-64)
SkSNwcVEl
H1GEvHcee
ICLR.cc/2017/conference/-/paper257/official/review
{"title": "", "rating": "6: Marginally above acceptance threshold", "review": "The authors propose a novel energy-function for RBMs, using the leaky relu max(cx, x) activation function for the hidden-units. Analogous to ReLU units in feed-forward networks, these leaky relu RBMs split the input space into a combinatorial number of regions, where each region defines p(v) as a truncated Gaussian. A further contribution of the paper is in proposing a novel sampling scheme for the leaky RBM: one can run a much shorter Markov chain by initializing it from a sample of the leaky RBM with c=1 (which yields a standard multi-variate normal over the visibles) and then slowly annealing c. In low-dimension a similar scheme is shown to outperform AIS for estimating the partition function. Experiments are performed on both CIFAR-10 and SVHN.\n\nThis is an interesting paper which I believe would be of interest to the ICLR community. The theoretical contributions are strong: the authors not only introduce a proper energy formulation of ReLU RBMs, but also a novel sampling mechanism and an improvement on AIS for estimating their partition function. \n\nUnfortunately, the experimental results are somewhat limited. The PCD baseline is notably absent. Including (bernoulli visible, leaky-relu hidden) would have allowed the authors to evaluate likelihoods on standard binary RBM datasets. As it stands, performance on CIFAR-10 and SVHN, while improved with leaky-relu, is a far cry from more recent generative models (VAE-based, or auto-regressive models). While this comparison may be unfair, it will certainly limit the wider appeal of the paper to the community. Furthermore, there is the issue of the costly projection method which is required to guarantee that the energy-function remain bounded (covariance matrix over each region be PSD). Again, while it may be fair to leave that for future work given the other contributions, this will further limit the appeal of the paper.\n\nPROS:\nIntroduces an energy function having the leaky-relu as an activation function\nIntroduces a novel sampling procedure based on annealing the leakiness parameter\nSimilar sampling scheme shown to outperform AIS\n\nCONS:\nResults are somewhat out of date\nMissing experiments on binary datasets (more comparable to prior RBM work)\nMissing PCD baseline\nCost of projection method\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Annealing Gaussian into ReLU: a New Sampling Strategy for Leaky-ReLU RBM
["Chun-Liang Li", "Siamak Ravanbakhsh", "Barnabas Poczos"]
The Restricted Boltzmann Machine (RBM) is a bipartite graphical model that is used as the building block in energy-based deep generative models. Due to numerical stability and quantifiability of the likelihood, the RBM is commonly used with Bernoulli units. Here, we consider an alternative member of the exponential family RBM with leaky rectified linear units -- called leaky RBM. We first study the joint and marginal distributions of the leaky RBM under different leakiness, which provides us with important insights by connecting the leaky RBM model and truncated Gaussian distributions. The connection leads us to a simple yet efficient method for sampling from this model, where the basic idea is to anneal the leakiness rather than the energy -- i.e., start from a fully Gaussian/linear unit and gradually decrease the leakiness over iterations. This serves as an alternative to annealing the temperature parameter and enables numerical estimation of the likelihood that is more efficient and more accurate than the commonly used annealed importance sampling (AIS). We further demonstrate that the proposed sampling algorithm enjoys a faster mixing property than the contrastive divergence algorithm, which benefits training without any additional computational cost.
["Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=H1GEvHcee
https://openreview.net/pdf?id=H1GEvHcee
https://openreview.net/forum?id=H1GEvHcee&noteId=SkSNwcVEl
Hyd8QeSVl
H1GEvHcee
ICLR.cc/2017/conference/-/paper257/official/review
{"title": "A new model of RBM is proposed, where the conditional of the hidden is a leaky ReLU. In addition an annealed AIS sampler is also proposed to test the learned models quantifiably", "rating": "5: Marginally below acceptance threshold", "review": "\nBased on previous work such as the stepped sigmoid units and ReLU hidden units for discriminatively trained supervised models, a Leaky-ReLU model is proposed for generative learning.\n\nPro: what is interesting is that unlike the traditional way of first defining an energy function and then deriving the conditional distributions, this paper propose the forms of the conditional first and then derive the energy function. However this general formulation is not novel to this paper, but was generalized to exponential family GLMs earlier.\n\nCon: \nBecause of the focus on specifying the conditionals, the joint pdf and the marginal p(v) becomes complicated and hard to compute.\n\nOn the experiments, it would been nice to see a RBM with binary visbles and leaky ReLu for hiddens. This would demonstrate the superiority of the leaky ReLU hidden units. In addition, there are more results on binary MNIST modeling with which the authors can compare the results to. While the authors is correct that the annealing distribution is no longer Gaussian, perhaps CD-25 or (Faast) PCD experiments can be run to compare agains the baseline RBM trained using (Fast) PCD.\n\nThis paper is interesting as it combines new hidden function with the easiness of annealed AIS sampling, However, the baseline comparisons to Stepped Sigmoid Units (Nair &Hinton) or other models like the spike-and-slab RBMs (and others) are missing, without those comparisons, it is hard to tell whether leaky ReLU RBMs are better even in continuous visible domain.\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Annealing Gaussian into ReLU: a New Sampling Strategy for Leaky-ReLU RBM
["Chun-Liang Li", "Siamak Ravanbakhsh", "Barnabas Poczos"]
The Restricted Boltzmann Machine (RBM) is a bipartite graphical model that is used as the building block in energy-based deep generative models. Due to numerical stability and quantifiability of the likelihood, the RBM is commonly used with Bernoulli units. Here, we consider an alternative member of the exponential family RBM with leaky rectified linear units -- called leaky RBM. We first study the joint and marginal distributions of the leaky RBM under different leakiness, which provides us with important insights by connecting the leaky RBM model and truncated Gaussian distributions. The connection leads us to a simple yet efficient method for sampling from this model, where the basic idea is to anneal the leakiness rather than the energy -- i.e., start from a fully Gaussian/linear unit and gradually decrease the leakiness over iterations. This serves as an alternative to annealing the temperature parameter and enables numerical estimation of the likelihood that is more efficient and more accurate than the commonly used annealed importance sampling (AIS). We further demonstrate that the proposed sampling algorithm enjoys a faster mixing property than the contrastive divergence algorithm, which benefits training without any additional computational cost.
["Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=H1GEvHcee
https://openreview.net/pdf?id=H1GEvHcee
https://openreview.net/forum?id=H1GEvHcee&noteId=Hyd8QeSVl
r1Cybi8Ex
HyenWc5gx
ICLR.cc/2017/conference/-/paper529/official/review
{"title": "Review", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes a method for transfer learning, i.e. leveraging a network trained on some original task A in learning a new task B, which not only improves performance on the new task B, but also tries to avoid degradation in performance on A. The general idea is based on encouraging a model trained on A, while training on the new task B, to match fake targets produced by the model itself but when it is trained only on the original task A.\nExperiments show that this method can help in improving the result on task B, and is better than other baselines, including standard fine-tuning.\n\n\nGeneral comments/questions:\n- As far as I can tell, there is no experimental result supporting the claim that your model still performs well on the original task. All experiments show that you can improve on the new task only. \n- The introduction makes a strong statements about the distilling logical rule engine into a neural network, which I find a bit misleading. The approach in the paper is not specific to transferring from logical rules (as stated in the Sec 2) and is simply relying on the rule engine to provide labels for unlabelled data.\n- One of the obvious baselines to compare with your approach is standard multi-task learning on both tasks A and B together. That is, you train the model from scratch on both tasks simultaneously (which sharing parameters). It is not clear this is the same as what is referred to in Sec. 8 as \"joint training\". Can you please explain more clearly what you refer to as joint training?\n- Why can't we find the same baselines in both Table 2 and Table 3? For example Table 2 is missing \"joint training\", and Table 3 is missing GRU trained on the target task.\n- While the idea is presented as a general method for transfer learning, experiments are focused on one domain (sentiment analysis on SemEval task). I think that either experiments should include applying the idea on at least one other different domain, or the writing of the paper should be modified to make the focus more specific to this domain/task.\n\n\nWriting comments\n- The writing of the paper in general needs some improvement, but more specifically in the experiment section, where experiment setting and baselines should be explained more concisely.\n- Ensemble methodology paragraph does not fit the flow of the paper. I would rather explain it in the experiments section, rather than including it as part of your approach.\n- Table 1 seems like reporting cross-validation results, and I do not think is very informative to general reader.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Representation Stability as a Regularizer for Improved Text Analytics Transfer Learning
["Matthew Riemer", "Elham Khabiri", "Richard Goodwin"]
Although neural networks are well suited for sequential transfer learning tasks, the catastrophic forgetting problem hinders proper integration of prior knowledge. In this work, we propose a solution to this problem by using a multi-task objective based on the idea of distillation and a mechanism that directly penalizes forgetting at the shared representation layer during the knowledge integration phase of training. We demonstrate our approach on a Twitter domain sentiment analysis task with sequential knowledge transfer from four related tasks. We show that our technique outperforms networks fine-tuned to the target task. Additionally, we show both through empirical evidence and examples that it does not forget useful knowledge from the source task that is forgotten during standard fine-tuning. Surprisingly, we find that first distilling a human made rule based sentiment engine into a recurrent neural network and then integrating the knowledge with the target task data leads to a substantial gain in generalization performance. Our experiments demonstrate the power of multi-source transfer techniques in practical text analytics problems when paired with distillation. In particular, for the SemEval 2016 Task 4 Subtask A (Nakov et al., 2016) dataset we surpass the state of the art established during the competition with a comparatively simple model architecture that is not even competitive when trained on only the labeled task specific data.
["Deep learning", "Transfer Learning", "Natural language processing"]
https://openreview.net/forum?id=HyenWc5gx
https://openreview.net/pdf?id=HyenWc5gx
https://openreview.net/forum?id=HyenWc5gx&noteId=r1Cybi8Ex
ryO3U0GNe
HyenWc5gx
ICLR.cc/2017/conference/-/paper529/official/review
{"title": "Interesting work, quite domain-specific, suboptimal focus and structure", "rating": "6: Marginally above acceptance threshold", "review": "This paper introduces a new method for transfer learning that avoids the catastrophic forgetting problem. \nIt also describes an ensembling strategy for combining models that were learned using transfer learning from different sources.\nIt puts all of this together in the context of recurrent neural networks for text analytics problems, to achieve new state-of-the-art results for a subtask of the SemEval 2016 competition.\nAs the paper acknowledges, 1.5% improvement over the state-of-the-art is somewhat disappointing considering that it uses an ensemble of 5 quite different networks.\n\nThese are interesting contributions, but due to the many pieces, unfortunately, the paper does not seem to have a clear focus. From the title and abstract/conclusion I would've expected a focus on the transfer learning problem. However, the description of the authors' approach is merely a page, and its evaluation is only another page. In order to show that this idea is a new methodological advance, \nit would've been good to show that it also works in at least one other application (e.g., just some multi-task supervised learning problem). Rather, the paper takes a quite domain-specific approach and discusses the pieces the authors used to obtain state-of-the-art performance for one problem. That is OK, but I would've rather expected that from a paper called something like \"Improved knowledge transfer and distillation for text analytics\". If accepted, I encourage the authors to change the title to something along those lines.\n\nThe many pieces also made it hard for me to follow the authors' train of thought. I'm sure the authors had a good reason for their section ordering, but I didn't see the red thread in it. How about re-organizing the sections as follows to discuss one contribution at a time?\n1,2,4,3,8 including 6, put 9 into an appendix and point to it from here, 7, 5, 10. That would first discuss the transfer learning piece (4, and experiments potentially in a subsection with previous sections 3,8,6), then discuss the distillation of logical rules (7), and then discuss ensembling and experiments for it (5 and 10). One clue that the current structure is suboptimal is that there are 11 sections...\n\nI like the authors' idea for transfer learning without catastropic forgetting, and I must admit I would've rather liked to read a paper solely about that (studying where it works, and where it fails) than about the many other topics of the paper. I weakly vote for acceptance since I like the ideas, but if the paper does not make it in, I would suggest that the authors consider splitting it into two papers, each of which could hopefully be more focused. \n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Representation Stability as a Regularizer for Improved Text Analytics Transfer Learning
["Matthew Riemer", "Elham Khabiri", "Richard Goodwin"]
Although neural networks are well suited for sequential transfer learning tasks, the catastrophic forgetting problem hinders proper integration of prior knowledge. In this work, we propose a solution to this problem by using a multi-task objective based on the idea of distillation and a mechanism that directly penalizes forgetting at the shared representation layer during the knowledge integration phase of training. We demonstrate our approach on a Twitter domain sentiment analysis task with sequential knowledge transfer from four related tasks. We show that our technique outperforms networks fine-tuned to the target task. Additionally, we show both through empirical evidence and examples that it does not forget useful knowledge from the source task that is forgotten during standard fine-tuning. Surprisingly, we find that first distilling a human made rule based sentiment engine into a recurrent neural network and then integrating the knowledge with the target task data leads to a substantial gain in generalization performance. Our experiments demonstrate the power of multi-source transfer techniques in practical text analytics problems when paired with distillation. In particular, for the SemEval 2016 Task 4 Subtask A (Nakov et al., 2016) dataset we surpass the state of the art established during the competition with a comparatively simple model architecture that is not even competitive when trained on only the labeled task specific data.
["Deep learning", "Transfer Learning", "Natural language processing"]
https://openreview.net/forum?id=HyenWc5gx
https://openreview.net/pdf?id=HyenWc5gx
https://openreview.net/forum?id=HyenWc5gx&noteId=ryO3U0GNe
r1ECvAH4g
HyenWc5gx
ICLR.cc/2017/conference/-/paper529/official/review
{"title": "", "rating": "7: Good paper, accept", "review": "This paper proposes a regularization technique for neural network training that relies on having multiple related tasks or datasets in a transfer learning setting. The proposed technique is straightforward to describe and can also leverage external labeling systems perhaps based on logical rules. The paper is clearly written and the experiments seem relatively thorough. \n\nOverall this is a nice paper but does not fully address how robust the proposed technique is. For each experiment there seems to be a slightly different application of the proposed technique, or a lot of ensembling and cross validation. I can\u2019t figure out if this is because the proposed technique does not work well in general and thus required a lot of fiddling to get right in experiments, or if this is simply an artifact of ad-hoc experiments to try and get the best performance overall. If more datasets or addressing this issue directly in discussion was able to show this the strengths and limitations of the proposed technique more clearly, this could be a great paper. \n\nOverall the proposed method seems nice and possibly useful for other problems. However in the details of logical rule distillation and various experiment settings it seems like there is a lot of running the model many times or selecting a particular way of reusing the models and data that makes me wonder how robust the technique is or whether it requires a lot of trying various approaches, ensembling, or picking the best model from cross validation to show real gains. The authors could help by discussing this explicitly for all experiments in one place rather than listing the various choices / approaches in each experiment. As an example, these sorts of phrases make me very unsure how reliable the method is in practice versus how much the authors had to engineer this regularizer to perform well:\n\u201cWe noticed that equation 8 is actually prone to overfitting away from a good solution on the test set although it often finds a pretty good one early in training. \u201c\n\nThe introduction section should first review the definitions of transfer learning vs multi-task learning to make the discussion more clear. It also deems justification why \u201ccatastrophic forgetting\u201d is actually a problem. If the final target task is the only thing of interest then forgetting the source task is not an issue and the authors should motivate why forgetting matters in their setting. This paper explores sequential transfer so it\u2019s not obvious why forgetting the source task matters.\n\nSection 7 introduces the logical rules engine in a fairly specific context. Rather it would be good state more generally what this system entails to help people figure out how this method would apply to other problems.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Representation Stability as a Regularizer for Improved Text Analytics Transfer Learning
["Matthew Riemer", "Elham Khabiri", "Richard Goodwin"]
Although neural networks are well suited for sequential transfer learning tasks, the catastrophic forgetting problem hinders proper integration of prior knowledge. In this work, we propose a solution to this problem by using a multi-task objective based on the idea of distillation and a mechanism that directly penalizes forgetting at the shared representation layer during the knowledge integration phase of training. We demonstrate our approach on a Twitter domain sentiment analysis task with sequential knowledge transfer from four related tasks. We show that our technique outperforms networks fine-tuned to the target task. Additionally, we show both through empirical evidence and examples that it does not forget useful knowledge from the source task that is forgotten during standard fine-tuning. Surprisingly, we find that first distilling a human made rule based sentiment engine into a recurrent neural network and then integrating the knowledge with the target task data leads to a substantial gain in generalization performance. Our experiments demonstrate the power of multi-source transfer techniques in practical text analytics problems when paired with distillation. In particular, for the SemEval 2016 Task 4 Subtask A (Nakov et al., 2016) dataset we surpass the state of the art established during the competition with a comparatively simple model architecture that is not even competitive when trained on only the labeled task specific data.
["Deep learning", "Transfer Learning", "Natural language processing"]
https://openreview.net/forum?id=HyenWc5gx
https://openreview.net/pdf?id=HyenWc5gx
https://openreview.net/forum?id=HyenWc5gx&noteId=r1ECvAH4g
Bk3Efq-Nl
HJjiFK5gx
ICLR.cc/2017/conference/-/paper513/official/review
{"title": "Progress in reducing the supervision required by NPI", "rating": "7: Good paper, accept", "review": "Neural Programmer-Interpreters (NPI) achieves greatly reduced sample complexity and better generalization than flat seq2seq models for program induction, but requires program traces at multiple levels of abstraction for training, which is a very strong form of supervision. One obvious way to improve this situation, addressed in this work, is to only train on the lowest-level traces, with a latent compositional program structure. This makes sense because the \"raw\" low-level traces can be cheaply gathered in many cases just by watching expert demonstrations, without being explicitly told the more temporally abstract structures.\n\nThis paper shows that a variant of NPI, named NPL, can achieve even better generalization performance with weaker supervision (mostly flat traces), and also extends the model to a new grid world task. Unfortunately, it still requires being told the overall program structure by being given a few *full* execution traces. Still, I see this as important progress. It extends NPI in a quite nontrivial way by introducing a stack mechanism modeling the latent program call structure, which makes the training process much more closely match what the model does at test time. The results tell us that flat execution traces can take us almost all the way toward learning compositional programs from demonstrations - the hard part is of course learning to actually discover the subprogram structure.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Neural Program Lattices
["Chengtao Li", "Daniel Tarlow", "Alexander L. Gaunt", "Marc Brockschmidt", "Nate Kushman"]
We propose the Neural Program Lattice (NPL), a neural network that learns to perform complex tasks by composing low-level programs to express high-level programs. Our starting point is the recent work on Neural Programmer-Interpreters (NPI), which can only learn from strong supervision that contains the whole hierarchy of low-level and high-level programs. NPLs remove this limitation by providing the ability to learn from weak supervision consisting only of sequences of low-level operations. We demonstrate the capability of NPL to learn to perform long-hand addition and arrange blocks in a grid-world environment. Experiments show that it performs on par with NPI while using weak supervision in place of most of the strong supervision, thus indicating its ability to infer the high-level program structure from examples containing only the low-level operations.
["Deep learning", "Semi-Supervised Learning"]
https://openreview.net/forum?id=HJjiFK5gx
https://openreview.net/pdf?id=HJjiFK5gx
https://openreview.net/forum?id=HJjiFK5gx&noteId=Bk3Efq-Nl
rkLqZvB4g
HJjiFK5gx
ICLR.cc/2017/conference/-/paper513/official/review
{"title": "Review", "rating": "4: Ok but not good enough - rejection", "review": "First I would like to apologize for the late review.\n\nThis paper proposes an extension of the NPI model (Reed & de Freitas) by using an extension of the probabilistic stacks introduced in Mikolov et al.. This allows them to train their model with less supervision than Reed & de Freitas. \n\nOverall the model is a nice extension of NPI. While it requires less supervision than NPI, it still requires \"sequences of elementary operations paired with environment observations, and [...] a couple of examples which include the full abstraction hierarchy\". This may limit the scope of this work.\n\nThe paper claims that their \"method is leverages stronger supervision in the form of elementary action sequences rather than just input-output examples (sic). Such sequences are relatively easy to gather in many natural settings\". It would be great if the authors clarify what they mean by \"relatively easy to gather in many natural settings\". They also claim that \"the additional supervision improves the data efficiency and allow our technique to scale to more complicated problems\". However, this paper only addresses two toy problems which are neither \"natural settings\" nor of a large scale (or at least not larger than those addressed in the related literature, see Zaremba et al. for addition). \n\nIn the introduction, the author states that \"Existing techniques, however, cannot be applied on data like this because it does not contain the abstraction hierarchy.\" What are the \"existing techniques\", they are referring to? This work only addresses the problem of long addition and puzzle solving in a block world. Afaik, Zaremba et al. has shown that with no supervision, it can solve the long addition problem and Sukhbaatar et al. (\"Mazebase: A sandbox for learning from games\") shows that a memory network can solve puzzles in a blockworld with little supervision.\n\nIn the conclusion, the author states that \"remarkably, NPL achieves state-of-the-art performances with much less supervision compared to existing models, making itself more applicable to real-world applications where full program traces are hard to get.\" However for all the experiments, they \"include a small number of FULL samples\" (FULL == \"samples with full program traces\"). Unfortunately even if this means that they need less FULL examples, they still need \"full program traces\", contradicting their final claim. Moreover, as shown figure 7, their model does not use a \"small number of FULL samples\" but rather a significantly smaller amount of FULL examples than NPI, i.e., 16 vs 128. \n\n\"All experiments were run with 10 different random seeds\": does the environment change as well between the runs, i.e. are the FULL examples different between the runs? If it is the case and since you select the best run (on a validation set), the NPL model does not consume 16 FULL examples but 160 FULL examples for nanoCraft. \n\nConcerning the NanoCraft example, it would be good to have more details about how the examples are generated: how do you make sure that the train/val/test sets are different? How the rectangular shape are generated? If I consider all possible rectangles in a 6x6 grid, there are (6x6)x(6x6)/2 = 648 possibilities, thus taking 256 examples sum up to ~40% of the total number of rectangles. 
This does not even account for the fact that from an initial state, many rectangles can be made, making my estimate probably lower than the real coverage of examples.\n\nConcerning the addition, it would interesting to show what an LSTM would do: Take a 2 layer LSTM that takes the 2 current digits as an input and produce the current output ( \"123+45\" would be input[0] = [3,5], input[1]=[2,4], input[2]=[1, 0] and output[0] = 8...). I would be curious to see how such baseline would work. It can be trained on input/output and it is barely different from a standard sequence model. Also, would it be possible to compare with Zaremba et al.?\n\nFinally, as discussed previously with the authors, it would be good if they discuss more in length the relation between their probabilistic stacks and Mikolov et al.. They have a lot of similarities and it is not addressed in the current version. It should be addressed in the section describing the approach. I believe the authors agreed on this and I will wait for the updated version.\n\nOverall, it is a nice extension of Reed & de Freitas, but I'm a bit surprised by the lack of discussion about the rest of the literature (beside Reed & de Freitas, most previous work are only lightly discussed in the related work). This would have been fine if this paper would not suffer from a relatively weak experiment section that does not support the claims made in this work or show results that were not obtained by others before. \n\nMissing references:\n\"Learning simple arithmetic procedures\", Cottrell et al.\n\"Neural gpus learn algorithms\", Kaiser & Sutskever\n\"Mazebase: A sandbox for learning from games\", Sukhbaatar et al.\n\"Learning simple algorithms from examples\", Zaremba et al.\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neural Program Lattices
["Chengtao Li", "Daniel Tarlow", "Alexander L. Gaunt", "Marc Brockschmidt", "Nate Kushman"]
We propose the Neural Program Lattice (NPL), a neural network that learns to perform complex tasks by composing low-level programs to express high-level programs. Our starting point is the recent work on Neural Programmer-Interpreters (NPI), which can only learn from strong supervision that contains the whole hierarchy of low-level and high-level programs. NPLs remove this limitation by providing the ability to learn from weak supervision consisting only of sequences of low-level operations. We demonstrate the capability of NPL to learn to perform long-hand addition and arrange blocks in a grid-world environment. Experiments show that it performs on par with NPI while using weak supervision in place of most of the strong supervision, thus indicating its ability to infer the high-level program structure from examples containing only the low-level operations.
["Deep learning", "Semi-Supervised Learning"]
https://openreview.net/forum?id=HJjiFK5gx
https://openreview.net/pdf?id=HJjiFK5gx
https://openreview.net/forum?id=HJjiFK5gx&noteId=rkLqZvB4g
SylfwJf4g
HJjiFK5gx
ICLR.cc/2017/conference/-/paper513/official/review
{"title": "well formulated paper", "rating": "7: Good paper, accept", "review": "The paper presents the Neural Program Lattice (NPL), extending the previous Neural Programmer-Interpreters (NPI). The main idea is to generalize stack manipulation of NPI by making it probabilistic. This allows the content of the stack to be stochastic than deterministic, and the paper describes the feed-forward steps of NPL's program inference similar to the NPI formulation. A new objective function is provided to train the model that maximizes the probability of NPL model correctly predicting operation sequences, from execution traces. We believe this is an important extension. The experimental results illustrate that the NPL is able to learn task executions in a clean setting with perfect observations.\n\nThe paper is clearly presented and its background literature (i.e., NPI) is well covered. We also believe the paper is presenting a conceptually/technically meaningful extension of NPI, which will be of interest to a broad audience. We are still a bit concerned whether the NPL would be directly applicable for noisy observations (e.g., human skeletons) in a continuous space with less explicit structure, so more discussions will be interesting.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neural Program Lattices
["Chengtao Li", "Daniel Tarlow", "Alexander L. Gaunt", "Marc Brockschmidt", "Nate Kushman"]
We propose the Neural Program Lattice (NPL), a neural network that learns to perform complex tasks by composing low-level programs to express high-level programs. Our starting point is the recent work on Neural Programmer-Interpreters (NPI), which can only learn from strong supervision that contains the whole hierarchy of low-level and high-level programs. NPLs remove this limitation by providing the ability to learn from weak supervision consisting only of sequences of low-level operations. We demonstrate the capability of NPL to learn to perform long-hand addition and arrange blocks in a grid-world environment. Experiments show that it performs on par with NPI while using weak supervision in place of most of the strong supervision, thus indicating its ability to infer the high-level program structure from examples containing only the low-level operations.
["Deep learning", "Semi-Supervised Learning"]
https://openreview.net/forum?id=HJjiFK5gx
https://openreview.net/pdf?id=HJjiFK5gx
https://openreview.net/forum?id=HJjiFK5gx&noteId=SylfwJf4g
SkyCWALEe
HJOZBvcel
ICLR.cc/2017/conference/-/paper380/official/review
{"title": "Review", "rating": "6: Marginally above acceptance threshold", "review": "I sincerely apologize for the late-arriving review. \n\nThis paper proposes to frame the problem of structure estimation as a supervised classification problem. The input is an empirical covariance matrix of the observed data, the output the binary decision whether or not two variables share a link. The paper is sufficiently clear, the goals are clear and everything is well described. \n\nThe main interesting point is the empirical results of the experimental section. The approach is simple and performs better than previous non-learning based methods. This observation is interesting and will be of interest in structure discovery problems. \n\nI rate the specific construction of the supervised learning method as a reasonable attempt attempt to approach this problem. There is not very much technical novelty in this part. E.g., an algorithmic contribution would have been a method that is invariant to data permutation could have been a possible target for a technical contribution. The paper makes no claims on this technical part, as said, the method is well constructed and well executed. \n\nIt is good to precisely state the theoretical parts of a paper, the authors do this well. All results are rather straight-forward, I like that the claims are written down, but there is little surprise in the statements. \n\nIn summary, the paper makes a very interesting observation. Graph estimation can be posed as a supervised learning problem and training data from a separate source is sufficient to learn structure in novel and unseen test data from a new source. Practically this may be relevant, on one hand the empirical results are stronger with this method, on the other hand a practitioner who is interested in structural discovery may have side constraints about interpretability of the deriving method. From the Discussion and Conclusion I understand that the authors consider this as future work. It is a good first step, it could be stronger but also stands on its own already.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning to Discover Sparse Graphical Models
["Eugene Belilovsky", "Kyle Kastner", "Gael Varoquaux", "Matthew B. Blaschko"]
We consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task often requiring the formulation of priors and sophisticated inference procedures. In the setting of Gaussian Graphical Models (GGMs) a popular estimator is a maximum likelihood objective with a penalization on the precision matrix. Adapting this estimator to capture domain-specific knowledge as priors or a new data likelihood requires great effort. In addition, structure recovery is an indirect consequence of the data-fit term. By contrast, it may be easier to generate training samples of data that arise from graphs with the desired structure properties. We propose here to leverage this latter source of information as training data to learn a function mapping from empirical covariance matrices to estimated graph structures. Learning this function brings two benefits: it implicitly models the desired structure or sparsity properties to form suitable priors, and it can be tailored to the specific problem of edge structure discovery, rather than maximizing data likelihood. We apply this framework to several real-world problems in structure discovery and show that it can be competitive with standard approaches such as graphical lasso, at a fraction of the execution time. We use convolutional neural networks to parametrize our estimators due to the compositional structure of the problem. Experimentally, our learnable graph-discovery method trained on synthetic data generalizes well to different data: identifying relevant edges in real data, completely unknown at training time. We find that on genetics, brain imaging, and simulation data we obtain competitive (and generally superior) performance, compared with analytical methods.
["sparse graphical models", "structure discovery", "priors", "competitive", "undirected graphical models", "observational data", "likely structures", "examples", "complex task", "formulation"]
https://openreview.net/forum?id=HJOZBvcel
https://openreview.net/pdf?id=HJOZBvcel
https://openreview.net/forum?id=HJOZBvcel&noteId=SkyCWALEe
B1aSUyJEg
HJOZBvcel
ICLR.cc/2017/conference/-/paper380/official/review
{"title": "Advantage of the proposed method", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes a new method for learning graphical models. Combined with a neural network architecture, some sparse edge structure is estimated via sampling methods. In introduction, the authors say that a problem in graphical lasso is model selection. However, the proposed method still implicitly includes model selection. In the proposed method, $P(G)$ is a sparse prior, and should include some hyper-parameters. How do you tune the hyper-parameters? Is this tuning an equivalent problem to model section? Therefore, I do not understand real advantage of this method over previous methods. What is the advantage of the proposed method?\n\nAnother concern is that this paper is unorganized. In Algorithm 1, first, G_i and \\Sigma_i are sampled, and then x_j is sampled from N(0, \\Sigma). Here, what is \\Sigma? Is it different from \\Sigma_i? Furthermore, how do you construct (Y_i, \\hat{\\Sigma}_i) from (G_i, X_i )? Finally, I have a simple question: Where is input data X (not sampled data) is used in Algorithm 1?\n\nWhat is the definition of the receptive field in Proposition 2 and Proposition 3?\n", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}
review
2017
ICLR.cc/2017/conference
Learning to Discover Sparse Graphical Models
["Eugene Belilovsky", "Kyle Kastner", "Gael Varoquaux", "Matthew B. Blaschko"]
We consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task often requiring the formulation of priors and sophisticated inference procedures. In the setting of Gaussian Graphical Models (GGMs) a popular estimator is a maximum likelihood objective with a penalization on the precision matrix. Adapting this estimator to capture domain-specific knowledge as priors or a new data likelihood requires great effort. In addition, structure recovery is an indirect consequence of the data-fit term. By contrast, it may be easier to generate training samples of data that arise from graphs with the desired structure properties. We propose here to leverage this latter source of information as training data to learn a function mapping from empirical covariance matrices to estimated graph structures. Learning this function brings two benefits: it implicitly models the desired structure or sparsity properties to form suitable priors, and it can be tailored to the specific problem of edge structure discovery, rather than maximizing data likelihood. We apply this framework to several real-world problems in structure discovery and show that it can be competitive with standard approaches such as graphical lasso, at a fraction of the execution time. We use convolutional neural networks to parametrize our estimators due to the compositional structure of the problem. Experimentally, our learnable graph-discovery method trained on synthetic data generalizes well to different data: identifying relevant edges in real data, completely unknown at training time. We find that on genetics, brain imaging, and simulation data we obtain competitive (and generally superior) performance, compared with analytical methods.
["sparse graphical models", "structure discovery", "priors", "competitive", "undirected graphical models", "observational data", "likely structures", "examples", "complex task", "formulation"]
https://openreview.net/forum?id=HJOZBvcel
https://openreview.net/pdf?id=HJOZBvcel
https://openreview.net/forum?id=HJOZBvcel&noteId=B1aSUyJEg
BkF-pCWVl
HJOZBvcel
ICLR.cc/2017/conference/-/paper380/official/review
{"title": "Interesting algorithm to estimate sparse graph structure", "rating": "7: Good paper, accept", "review": "The paper proposes a novel algorithm to estimate graph structures by using a convolutional neural network to approximate the function that maps from empirical covariance matrix to the sparsity pattern of the graph. Compared with existing approaches, the new algorithm can adapt to different network structures, e.g. small-world networks, better under the same empirical risk minimization framework. Experiments on synthetic and real-world datasets show promising results compared with baselines.\n\nIn general, I think it is an interesting and novel paper. The idea of framing structure estimation as a learning problem is especially interesting and may inspire further research on related topics. The advantage of such an approach is that it allows easier adaptation to different network structure properties without designing specific regularization terms as in graph lasso.\n\nThe experiment results are also promising. In both synthetic and real-world datasets, the proposed algorithm outperforms other baselines in the small sample region. \n\nHowever, the paper can be made clearer in describing the network architectures. For example, in page 5, each o^k_{i,j} is said be a d-dimensional vector. But from the context, it seems o^k_{i,j} is a scalar (from o^0_{i,j} = p_{i,j}). It is not clear what o^k_{i,j} is exactly and what d is. Is it the number of channels for the convolutional filters?\n\nFigure 1 is also quite confusing. Why in (b) the table is 16 x 16 whereas in (a) there are only six nodes? And from the figure, it seems there is only one channel in each layer? What do the black squares represent and why are there three blocks of them. There are some descriptions in the text, but it is still not clear what they mean exactly.\n\nFor real-world data, how are the training data (Y, Sigma) generated? Are they generated in the same way as in the synthetic experiments where the entries are uniformly sparse? This is also related to the more general question of how to sample from the distribution P, in the case of real-world data.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning to Discover Sparse Graphical Models
["Eugene Belilovsky", "Kyle Kastner", "Gael Varoquaux", "Matthew B. Blaschko"]
We consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task often requiring the formulation of priors and sophisticated inference procedures. In the setting of Gaussian Graphical Models (GGMs) a popular estimator is a maximum likelihood objective with a penalization on the precision matrix. Adapting this estimator to capture domain-specific knowledge as priors or a new data likelihood requires great effort. In addition, structure recovery is an indirect consequence of the data-fit term. By contrast, it may be easier to generate training samples of data that arise from graphs with the desired structure properties. We propose here to leverage this latter source of information as training data to learn a function mapping from empirical covariance matrices to estimated graph structures. Learning this function brings two benefits: it implicitly models the desired structure or sparsity properties to form suitable priors, and it can be tailored to the specific problem of edge structure discovery, rather than maximizing data likelihood. We apply this framework to several real-world problems in structure discovery and show that it can be competitive with standard approaches such as graphical lasso, at a fraction of the execution time. We use convolutional neural networks to parametrize our estimators due to the compositional structure of the problem. Experimentally, our learnable graph-discovery method trained on synthetic data generalizes well to different data: identifying relevant edges in real data, completely unknown at training time. We find that on genetics, brain imaging, and simulation data we obtain competitive (and generally superior) performance, compared with analytical methods.
["sparse graphical models", "structure discovery", "priors", "competitive", "undirected graphical models", "observational data", "likely structures", "examples", "complex task", "formulation"]
https://openreview.net/forum?id=HJOZBvcel
https://openreview.net/pdf?id=HJOZBvcel
https://openreview.net/forum?id=HJOZBvcel&noteId=BkF-pCWVl
ryztRFW4e
BJVEEF9lx
ICLR.cc/2017/conference/-/paper475/official/review
{"title": "", "rating": "4: Ok but not good enough - rejection", "review": "The paper presents a framework to formulate data-structures in a learnable way. It is an interesting and novel approach that could generalize well to interesting datastructures and algorithms. In its current state (Revision of Dec. 9th), there are two strong weaknesses remaining: analysis of related work, and experimental evidence.\n\nReviewer 2 detailed some of the related work already, and especially DeepMind (which I am not affiliated with) presented some interesting and highly related results with its neural touring machine and following work. While it may be of course very hard to make direct comparisons in the experimental section due to complexity of the re-implementation, it would at least be very important to mention and compare to these works conceptually.\n\nThe experimental section shows mostly qualitative results, that do not (fully) conclusively treat the topic. Some suggestions for improvements:\n* It would be highly interesting to learn about the accuracy of the stack and queue structures, for increasing numbers of elements to store.\n* Can a queue / stack be used in arbitrary situations of push-pop operations occuring, even though it was only trained solely with consecutive pushes / consecutive pops? Does it in this enhanced setting `diverge' at some point?\n* The encoded elements from MNIST, even though in a 28x28 (binary?) space, are elements of a ten-element set, and can hence be encoded a lot more efficiently just by `parsing' them, which CNNs can do quite well. Is the NN `just' learning to do that? If so, its performance can be expected to strongly degrade when having to learn to stack more than 28*28/4=196 numbers (in case of an optimal parser and loss-less encoding). To argue more in this direction, experiments would be needed with an increasing number of stack / queue elements. Experimenting with an MNIST parsing NN in front of the actual stack/queue network could help strengthening or falsifying the claim.\n* The claims about `mental representations' have very little support throughout the paper. If indication for correspondence to mental models, etc., could be found, it would allow to hold the claim. Otherwise, I would remove it from the paper and focus on the NN aspects and maybe mention mental models as motivation.\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning Approximate Distribution-Sensitive Data Structures
["Zenna Tavares", "Armando Solar-Lezama"]
We present a computational model of mental representations as data-structures which are distribution sensitive, i.e., which exploit non-uniformity in their usage patterns to reduce time or space complexity. Abstract data types equipped with axiomatic specifications specify classes of concrete data structures with equivalent logical behavior. We extend this formalism to distribution-sensitive data structures with the concept of a probabilistic axiomatic specification, which is implemented by a concrete data structure only with some probability. We employ a number of approximations to synthesize several distribution-sensitive data structures from probabilistic specification as deep neural networks, such as a stack, queue, natural number, set, and binary tree.
["Unsupervised Learning"]
https://openreview.net/forum?id=BJVEEF9lx
https://openreview.net/pdf?id=BJVEEF9lx
https://openreview.net/forum?id=BJVEEF9lx&noteId=ryztRFW4e
Sy20Q1MNl
BJVEEF9lx
ICLR.cc/2017/conference/-/paper475/official/review
{"title": "Interesting direction, but not there yet.", "rating": "4: Ok but not good enough - rejection", "review": "A method for training neural networks to mimic abstract data structures is presented. The idea of training a network to satisfy an abstract interface is very interesting and promising, but empirical support is currently too weak. The paper would be significantly strengthened if the method could be shown to be useful in a realistic application, or be shown to work better than standard RNN approaches on algorithmic learning tasks.\n\nThe claims about mental representations are not well supported. I would remove the references to mind and brain, as well as the more philosophical points, or write a paper that really emphasizes one of these aspects and supports the claims.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning Approximate Distribution-Sensitive Data Structures
["Zenna Tavares", "Armando Solar-Lezama"]
We present a computational model of mental representations as data-structures which are distribution sensitive, i.e., which exploit non-uniformity in their usage patterns to reduce time or space complexity. Abstract data types equipped with axiomatic specifications specify classes of concrete data structures with equivalent logical behavior. We extend this formalism to distribution-sensitive data structures with the concept of a probabilistic axiomatic specification, which is implemented by a concrete data structure only with some probability. We employ a number of approximations to synthesize several distribution-sensitive data structures from probabilistic specification as deep neural networks, such as a stack, queue, natural number, set, and binary tree.
["Unsupervised Learning"]
https://openreview.net/forum?id=BJVEEF9lx
https://openreview.net/pdf?id=BJVEEF9lx
https://openreview.net/forum?id=BJVEEF9lx&noteId=Sy20Q1MNl
ryg9PB-Vg
BJVEEF9lx
ICLR.cc/2017/conference/-/paper475/official/review
{"title": "Review", "rating": "3: Clear rejection", "review": "The paper presents a way to \"learn\" approximate data structures. They train neural networks (ConvNets here) to perform as an approximate abstract data structure by having an L2 loss (for the unrolled NN) on respecting the axioms of the data structure they want the NN to learn. E.g. you NN.push(8), NN.push(6), NN.push(4), the loss is proportional to the distance with what is NN.pop()ed three times and 4, 6, 8 (this example is the one of Figure 1).\n\nThere are several flaws:\n - In the case of the stack: I do not see a difference between this and a seq-to-seq RNN trained with e.g. 8, 6, 4 as input sequence, to predict 4, 6, 8.\n - While some of the previous work is adequately cited, there is an important body of previous work (some from the 90s) on learning Peano's axioms, stacks, queues, etc. that is not cited nor compared to. For instance [Das et al. 1992], [Wiles & Elman 1995], and more recently [Graves et al. 2014], [Joulin & Mikolov 2015], [Kaiser & Sutskever 2016]...\n - Using MNIST digits, and not e.g. a categorical distribution on numbers, is adding complexity for no reason.\n - (Probably the biggest flaw) The experimental section is too weak to support the claims. The figures are adequate, but there is no comparison to anything. There is also no description nor attempt to quantify a form of \"success rate\" of learning such data structures, for instance w.r.t the number of examples, or w.r.t to the size of the input sequences. The current version of the paper (December 9th 2016) provides, at best, anecdotal experimental evidence to support the claims of the rest of the paper.\n\nWhile an interesting direction of research, I think that this paper is not experimentally sound enough for ICLR.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning Approximate Distribution-Sensitive Data Structures
["Zenna Tavares", "Armando Solar-Lezama"]
We present a computational model of mental representations as data-structures which are distribution sensitive, i.e., which exploit non-uniformity in their usage patterns to reduce time or space complexity. Abstract data types equipped with axiomatic specifications specify classes of concrete data structures with equivalent logical behavior. We extend this formalism to distribution-sensitive data structures with the concept of a probabilistic axiomatic specification, which is implemented by a concrete data structure only with some probability. We employ a number of approximations to synthesize several distribution-sensitive data structures from probabilistic specification as deep neural networks, such as a stack, queue, natural number, set, and binary tree.
["Unsupervised Learning"]
https://openreview.net/forum?id=BJVEEF9lx
https://openreview.net/pdf?id=BJVEEF9lx
https://openreview.net/forum?id=BJVEEF9lx&noteId=ryg9PB-Vg
rk_Zn-G4x
S1Jhfftgx
ICLR.cc/2017/conference/-/paper79/official/review
{"title": "Not very convincing", "rating": "3: Clear rejection", "review": "This paper proposes a way of enforcing constraints (or penalizing violations of those constraints) on outputs in structured prediction problems, while keeping inference unconstrained. The idea is to tweak the neural network parameters to make those output constraints hold. The underlying model is that of structured prediction energy networks (SPENs), recently proposed by Belanger et al. \n\nOverall, I didn't find the approach very convincing and the paper has a few problems regarding the empirical evaluation. There's also some imprecisions throughout. The proposed approach (secs 6 and 7) looks more like a \"little hack\" to try to make it vaguely similar to Lagrangian relaxation methods than something that is theoretically well motivated.\n\nBefore eq. 6: \"an exponential number of dual variables\" -- why exponential? it's not one dual variable per output.\n\nFrom the clarification questions:\n- The accuracy reported in Table 1 needs to be explained. \n- for the parsing experiments it would be good to report the usual F1 metric of parseval, and to compare with state of the art systems.\n- should use the standard training/dev/test splits of the Penn Treebank.\nThe reported conversion rate in Table 1 does not tell us how many violations are left by the unconstrained decoder to start with. It would be good to know what happens in highly structured problems where these violations are frequent, since these are the problems where the proposed approach could be more beneficial.\n\n\nMinor comments/typos:\n- sec.1: \"there are\" -> there is?\n- sec 1: \"We find that out method is able to completely satisfy constraints on 81% of the outputs.\" -> at this point, without specifying the problem, the model, and the constraints, this means very little. How many constrains does the unconstrained method satisfies?\n- sec 2 (last paragraph): \"For RNNs, each output depends on hidden states that are functions of previous output values\" -- this is not very accurate, as it doesn't hold for general RNNs, but only for those (e.g. RNN decoders in language modeling) where the outputs are fed back to the input in the next time frame. \n- sec 3: \"A major advantage of neural networks is that once trained, inference is extremely efficient.\" -- advantage over what? also, this is not necessarily true, depends on the network and on its size.\n- sec 3: \"our goal is take advantage\" -> to take advantage\n- last paragraph of sec 6: \"the larger model affords us\" -> offers?\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Enforcing constraints on outputs with unconstrained inference
["Jay Yoon Lee", "Michael L. Wick", "Jean-Baptiste Tristan"]
Increasingly, practitioners apply neural networks to complex problems in natural language processing (NLP), such as syntactic parsing, that have rich output structures. Many such applications require deterministic constraints on the output values; for example, requiring that the sequential outputs encode a valid tree. While hidden units might capture such properties, the network is not always able to learn them from the training data alone, and practitioners must then resort to post-processing. In this paper, we present an inference method for neural networks that enforces deterministic constraints on outputs without performing post-processing or expensive discrete search over the feasible space. Instead, for each input, we nudge the continuous weights until the network's unconstrained inference procedure generates an output that satisfies the constraints. We find that our method reduces the number of violating outputs by up to 81\%, while improving accuracy.
["Natural language processing", "Structured prediction", "Deep learning"]
https://openreview.net/forum?id=S1Jhfftgx
https://openreview.net/pdf?id=S1Jhfftgx
https://openreview.net/forum?id=S1Jhfftgx&noteId=rk_Zn-G4x
rJfGR9LEg
S1Jhfftgx
ICLR.cc/2017/conference/-/paper79/official/review
{"title": "", "rating": "4: Ok but not good enough - rejection", "review": "This paper attempted to solve an interesting problem -- incorporating hard constraints in seq2seq model. The main idea is to modify the weight of the neural network in order to find a feasible solution. Overall, the idea presented in the paper is interesting, and it tries to solve an important problem. However, it seems to me the paper is not ready to publish yet.\n\nComments:\n\n- The first section of the paper is clear and well-motivated. \n\n- The authors should report test running time. The proposed approach changes the weight matrix. As a result, it needs to reevaluate the values of hidden states and perform the greedy search for each iteration of optimizing Eq (7). This is actually pretty expensive in comparison to running the beam search or other inference methods. Therefore, I'm not convinced that the proposed approach is a right direction for solving this problem (In table, 1, the authors mention that they run 100 steps of SGD). \n\n- If I understand correctly, Eq (7) is a noncontinuous function w.r.t W_\\lambda and the simple SGD algorithm will not be able to find its minimum.\n\n- For dependency parsing, there are standard splits of PTB. I would suggest the authors follow the same splits of train, dev, and test in order to compare with existing results. \n\n\nMinor comments: several sentences are misleading and should be rewritten carefully. \n\n- Beginning of Section 3: \"A major advantage of neural network is that once trained, inference is extremely efficient.\" This sentence is not generally right, and I guess the authors mean if using greedy search as inference method, the inference is efficient. \n\n- The description in the end of section 2 is awkward. To me, feed-forward and RNN are general families that cover many specific types of neural networks, and the training procedures are not necessarily to aim to optimize Eq. (2). Therefore, the description here might not be true. In fact, I don't think there is a need to bring up feed-forward networks here; instead, the authors should provide more details the connection between RNN and Eq (2) here.\n\n- The second paragraph of section 3 is related to [1], where it shows the search space of the inference can be represented as an imperative program. \n\t\n\n\n[1] Credit assignment compiler for joint prediction, NIPS 2016\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Enforcing constraints on outputs with unconstrained inference
["Jay Yoon Lee", "Michael L. Wick", "Jean-Baptiste Tristan"]
Increasingly, practitioners apply neural networks to complex problems in natural language processing (NLP), such as syntactic parsing, that have rich output structures. Many such applications require deterministic constraints on the output values; for example, requiring that the sequential outputs encode a valid tree. While hidden units might capture such properties, the network is not always able to learn them from the training data alone, and practitioners must then resort to post-processing. In this paper, we present an inference method for neural networks that enforces deterministic constraints on outputs without performing post-processing or expensive discrete search over the feasible space. Instead, for each input, we nudge the continuous weights until the network's unconstrained inference procedure generates an output that satisfies the constraints. We find that our method reduces the number of violating outputs by up to 81\%, while improving accuracy.
["Natural language processing", "Structured prediction", "Deep learning"]
https://openreview.net/forum?id=S1Jhfftgx
https://openreview.net/pdf?id=S1Jhfftgx
https://openreview.net/forum?id=S1Jhfftgx&noteId=rJfGR9LEg
H13e_DgNg
S1Jhfftgx
ICLR.cc/2017/conference/-/paper79/official/review
{"title": "Reject", "rating": "3: Clear rejection", "review": "This paper proposes a dual-decomposition-inspired technique for enforcing constraints in neural network prediction systems.\n\nMany things don't quite make sense to me:\n 1. Most seq2seq models (such as those used for parsing) have substantially better performance when coupled with beam search than greedy search, and exact search is infeasible. This is because these models are trained to condition on discrete values of past outputs in each timestamp, and hence the problem of finding the highest-scoring total sequence of outputs is not solvable efficiently. It's unclear what kind of model this paper is using which allows for greedy decoding, and how well it compares to the state-of-the-art, specially when constraint-aware beam search is used. This comparison is specially interesting because both constrained beam search and this dual-decomposition-like approach require multiple computations of the model's score.\n 2. It's unclear (to me at least) how to differentiate the constraint term g() in the objective function in the general case (though the particular example used here is understandable)\n 3. The paper claims that \"Lagrangian relaxation methods for NLP have multipliers for each output variable that can be combined with linear models [...] . Since our non-linear functions and global constraints do not afford us the same ability\" but it is possible to add linear terms to the outputs of neural networks, possibly avoiding rerunning all the expensive inference terms.\n\nMoreover, the justification for the particular method is hand-wavy at best, with inconvenient terms from equations ignored or changed at will. At this point it might be better to omit the attempted theoretical explanation and just present this method as a heuristic which is likely to achieve the desired result.\n\nThis, plus the concerns around lack of clear comparisons with baselines on benchmark problems lead me to recommend rejection. Further explanation of how this compares with beam search, how this relates to the state-of-the-art, and a better explanation for how to come up with differentiable constraint sets, are probably required for acceptance.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Enforcing constraints on outputs with unconstrained inference
["Jay Yoon Lee", "Michael L. Wick", "Jean-Baptiste Tristan"]
Increasingly, practitioners apply neural networks to complex problems in natural language processing (NLP), such as syntactic parsing, that have rich output structures. Many such applications require deterministic constraints on the output values; for example, requiring that the sequential outputs encode a valid tree. While hidden units might capture such properties, the network is not always able to learn them from the training data alone, and practitioners must then resort to post-processing. In this paper, we present an inference method for neural networks that enforces deterministic constraints on outputs without performing post-processing or expensive discrete search over the feasible space. Instead, for each input, we nudge the continuous weights until the network's unconstrained inference procedure generates an output that satisfies the constraints. We find that our method reduces the number of violating outputs by up to 81\%, while improving accuracy.
["Natural language processing", "Structured prediction", "Deep learning"]
https://openreview.net/forum?id=S1Jhfftgx
https://openreview.net/pdf?id=S1Jhfftgx
https://openreview.net/forum?id=S1Jhfftgx&noteId=H13e_DgNg
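The mechanism these three reviews debate -- nudging the weights at inference time until unconstrained greedy decoding satisfies the constraints -- can be sketched in a few lines. The sketch below is our reading of the idea, not the authors' code; `model`, `violation`, and `decode_satisfies` are hypothetical stand-ins for the trained network, the differentiable penalty g(.), and the hard feasibility check.

import copy
import torch

def constrained_decode(model, x, violation, decode_satisfies, steps=100, lr=1e-3):
    nudged = copy.deepcopy(model)              # leave the trained weights intact
    opt = torch.optim.SGD(nudged.parameters(), lr=lr)
    for _ in range(steps):
        probs = torch.softmax(nudged(x), dim=-1)    # soft outputs
        if decode_satisfies(probs.argmax(dim=-1)):  # greedy decode feasible?
            break
        loss = violation(probs)                # differentiable penalty g(probs)
        opt.zero_grad(); loss.backward(); opt.step()
    return nudged(x).argmax(dim=-1)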
SJXnoez4e
BJ46w6Ule
ICLR.cc/2017/conference/-/paper36/official/review
{"title": "Improve the exposition", "rating": "6: Marginally above acceptance threshold", "review": "The goal of this paper is to learn \u201c a collection of experts that are individually\nmeaningful and that have disjoint responsibilities.\u201d Unlike a standard mixture model, they \u201cuse a different mixture for each dimension d.\u201d While the results seem promising, the paper exposition needs significant improvement.\n\nComments:\n\nThe paper jumps in with no motivation at all. What is the application, or even the algorithm, or architecture that this is used for? This should be addressed at the beginning.\n\nThe subsequent exposition is not very clear. There are assertions made with no justification, e.g. \u201cthe experts only have a small variance for some subset of the variables while the variance of the other variables is large.\u201d \n\nSince you\u2019re learning both the experts and the weights, can this be rephrased in terms of dictionary learning? Please discuss the relevant related literature.\n\nThe horse data set is quite small with respect to the feature dimension, and so the conclusions may not necessarily generalize.\n\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Dynamic Partition Models
["Marc Goessling", "Yali Amit"]
We present a new approach for learning compact and intuitive distributed representations with binary encoding. Rather than summing up expert votes as in products of experts, we employ for each variable the opinion of the most reliable expert. Data points are hence explained through a partitioning of the variables into expert supports. The partitions are dynamically adapted based on which experts are active. During the learning phase we adopt a smoothed version of this model that uses separate mixtures for each data dimension. In our experiments we achieve accurate reconstructions of high-dimensional data points with at most a dozen experts.
["experts", "data points", "new", "compact", "intuitive", "representations", "binary", "expert votes", "products"]
https://openreview.net/forum?id=BJ46w6Ule
https://openreview.net/pdf?id=BJ46w6Ule
https://openreview.net/forum?id=BJ46w6Ule&noteId=SJXnoez4e
S1zNjzGNg
BJ46w6Ule
ICLR.cc/2017/conference/-/paper36/official/review
{"title": "Potentially interesting paper, but not clear enough", "rating": "3: Clear rejection", "review": "The paper addresses the problem of learning compact binary data representations. I have a hard time understanding the setting and the writing of the paper is not making it any easier. For example I can't find a simple explanation of the problem and I am not familiar with these line of research. I read all the responses provided by authors to reviewer's questions and re-read the paper again and I still do not fully understand the setting and thus can't really evaluate the contributions of these work. The related work section does not exist and instead the analysis of the literature is somehow scattered across the paper. There are no derivations provided. Statements often miss references, e.g. the ones in the fourth paragraph of Section 3. This makes me conclude that the paper still requires significant work before it can be published.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Dynamic Partition Models
["Marc Goessling", "Yali Amit"]
We present a new approach for learning compact and intuitive distributed representations with binary encoding. Rather than summing up expert votes as in products of experts, we employ for each variable the opinion of the most reliable expert. Data points are hence explained through a partitioning of the variables into expert supports. The partitions are dynamically adapted based on which experts are active. During the learning phase we adopt a smoothed version of this model that uses separate mixtures for each data dimension. In our experiments we achieve accurate reconstructions of high-dimensional data points with at most a dozen experts.
["experts", "data points", "new", "compact", "intuitive", "representations", "binary", "expert votes", "products"]
https://openreview.net/forum?id=BJ46w6Ule
https://openreview.net/pdf?id=BJ46w6Ule
https://openreview.net/forum?id=BJ46w6Ule&noteId=S1zNjzGNg
HyRxUhRQg
BJ46w6Ule
ICLR.cc/2017/conference/-/paper36/official/review
{"title": "A type of PoE but the probability seems undefined and the EM algorithms remains obscure. Experiments are illustrative only. ", "rating": "3: Clear rejection", "review": "This paper proposes a new kind of expert model where a sparse subset of most reliable experts is chosen instead of the usual logarithmic opinion pool of a PoE.\nI find the paper very unclear. I tried to find a proper definition of the joint model p(x,z) but could not extract this from the text. The proposed \u201cEM-like\u201d algorithm should then also follow directly from this definition. At this point I do not see if such as definition even exists. In other words, is there is an objective function on which the iterates of the proposed algorithm are guaranteed to improve on the train data?\nWe also note that the \u201cproduct of unifac models\u201d from Hinton tries to do something very similar where only a subset of the experts will get activated to generate the input: http://www.cs.toronto.edu/~hinton/absps/tr00-004.pdf\nI tried to derive the update rule on top of page 4 from the \u201cconditional objective for p(x|h)\u201d in sec. 3.2 But I am getting something different (apart form the extra smoothing factors eps and mu_o). Does this follow? (If we define R=R_nk, mu-mu_k and X=X_n, I get mu = (XR)*inv(R^TR) as the optimal solution, which then needs to be projected back onto the probability simplex).\nThe experiments are only illustrative. They don\u2019t compare with other methods (such as an RBM or VAE) nor do they give any quantitative results. We are left with eyeballing some images. I have no idea whether what we see is impressive or not. \n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Dynamic Partition Models
["Marc Goessling", "Yali Amit"]
We present a new approach for learning compact and intuitive distributed representations with binary encoding. Rather than summing up expert votes as in products of experts, we employ for each variable the opinion of the most reliable expert. Data points are hence explained through a partitioning of the variables into expert supports. The partitions are dynamically adapted based on which experts are active. During the learning phase we adopt a smoothed version of this model that uses separate mixtures for each data dimension. In our experiments we achieve accurate reconstructions of high-dimensional data points with at most a dozen experts.
["experts", "data points", "new", "compact", "intuitive", "representations", "binary", "expert votes", "products"]
https://openreview.net/forum?id=BJ46w6Ule
https://openreview.net/pdf?id=BJ46w6Ule
https://openreview.net/forum?id=BJ46w6Ule&noteId=HyRxUhRQg
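The rule stated in the abstract -- explain each variable with the opinion of the most reliable active expert -- can be illustrated with a short NumPy sketch. The shapes, names, and random toy data below are our own assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
K, D = 12, 64                 # experts, data dimensions (illustrative)
mu = rng.random((K, D))       # expert opinions, e.g. Bernoulli means
r = rng.random((K, D))        # per-dimension expert reliabilities

def reconstruct(h):
    """h: length-K binary code indicating which experts are active."""
    active = np.flatnonzero(h)
    assert active.size > 0, "at least one expert must be active"
    # each variable is explained by the most reliable active expert,
    # which induces a (dynamic) partition of the D variables
    best = active[np.argmax(r[active], axis=0)]
    return mu[best, np.arange(D)]

x_hat = reconstruct(rng.integers(0, 2, size=K))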
S1qqrWz4l
HJ9rLLcxg
ICLR.cc/2017/conference/-/paper297/official/review
{"title": "", "rating": "7: Good paper, accept", "review": "The concept of data augmentation in the embedding space is very interesting. The method is well presented and also justified on different tasks such as spoken digits and image recognition etc.\n\nOne comments of the comparison is the use of a simple 2-layer MLP as the baseline model throughout all the tasks. It's not clear whether the gains maintain when a more complex baseline model is used. \n\nAnother comment is that the augmented context vectors are used for classification, just wondering how does it compare to using the reconstructed inputs. And furthermore, as in Table 4, both input and feature space extrapolation improves the performance, whether these two are complementary or not? ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Dataset Augmentation in Feature Space
["Terrance DeVries", "Graham W. Taylor"]
Dataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in supervised learning. While effective in tasks such as visual recognition, the set of transformations must be carefully designed, implemented, and tested for every new domain, limiting its re-use and generality. In this paper, we adopt a simpler, domain-agnostic approach to dataset augmentation. We start with existing data points and apply simple transformations such as adding noise, interpolating, or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space. A re-kindling of interest in unsupervised representation learning makes this technique timely and more effective. It is a simple proposal, but to-date one that has not been tested empirically. Working in the space of context vectors generated by sequence-to-sequence models, we demonstrate a technique that is effective for both static and sequential data.
["Unsupervised Learning"]
https://openreview.net/forum?id=HJ9rLLcxg
https://openreview.net/pdf?id=HJ9rLLcxg
https://openreview.net/forum?id=HJ9rLLcxg&noteId=S1qqrWz4l
rk7Sgr-Eg
HJ9rLLcxg
ICLR.cc/2017/conference/-/paper297/official/review
{"title": "", "rating": "6: Marginally above acceptance threshold", "review": "In this paper authors propose a novel data augmentation scheme where instead of augmenting the input data, they augment intermediate feature representations. Sequence auto-encoder based features are considered, and random perturbation, feature interpolation, and extrapolation based augmentation are evaluated. On three sequence classification tasks and on MNIST and CIFAR-10, it is shown that augmentation in feature space, specifically extrapolation based augmentation, results in good accuracy gains w.r.t. authors baseline.\n\nMy main questions and suggestions for further strengthening the paper are:\n\na) The proposed data augmentation approach is applied to a learnt auto-encoder based feature space termed \u2018context vector\u2019 in the paper. The context vectors are then augmented and used as input to train classification models. Have the authors considered applying their feature space augmentation idea directly to the classification model during training, and applying it to potentially many layers of the model? Also, have the authors considered convolutional neural network (CNN) architectures as well for feature space augmentation? CNNs are now the state-of-the-art in many image and sequence classification task, it would be very valuable to see the impact of the proposed approach in that model.\n\nb) When interpolation or extrapolation based augmentation was being applied, did the authors also consider utilizing nearby samples from competing classes as well? Especially in case of extrapolation based augmentation it will be interesting to check if the extrapolated features are closer to competing classes than original ones.\n\nc) With random interpolation or nearest neighbor interpolation based augmentation the accuracy seems to degrade pretty consistently. This is counter-intuitive. Do the authors have explanation for why the accuracy degraded with interpolation based augmentation?\n\nd) The results on MNIST and CIFAR-10 are inconclusive. For instance the error rate on CIFAR-10 is well below 10% these days, so I think it is hard to draw conclusions based on error rates above 30%. For MNIST it is surprising to see that data augmentation in the input space substantially degrades the accuracy (1.093% -> 1.477%). As mentioned above, I think this will require extending the feature space augmentation idea to CNN based models.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Dataset Augmentation in Feature Space
["Terrance DeVries", "Graham W. Taylor"]
Dataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in supervised learning. While effective in tasks such as visual recognition, the set of transformations must be carefully designed, implemented, and tested for every new domain, limiting its re-use and generality. In this paper, we adopt a simpler, domain-agnostic approach to dataset augmentation. We start with existing data points and apply simple transformations such as adding noise, interpolating, or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space. A re-kindling of interest in unsupervised representation learning makes this technique timely and more effective. It is a simple proposal, but to-date one that has not been tested empirically. Working in the space of context vectors generated by sequence-to-sequence models, we demonstrate a technique that is effective for both static and sequential data.
["Unsupervised Learning"]
https://openreview.net/forum?id=HJ9rLLcxg
https://openreview.net/pdf?id=HJ9rLLcxg
https://openreview.net/forum?id=HJ9rLLcxg&noteId=rk7Sgr-Eg
ByUItzz4g
HJ9rLLcxg
ICLR.cc/2017/conference/-/paper297/official/review
{"title": "review", "rating": "4: Ok but not good enough - rejection", "review": "TDLR: The authors present a regularization method wherein they add noise to some representation space. The paper mainly applies the technique w/ sequence autoencoders (Dai et al., 2015) without the usage of attention (i.e., only using the context vector). Experimental results show improvement from author's baseline on some toy tasks.\n\n=== Augmentation ===\nThe augmentation process is simple enough, take the seq2seq context vector and add noise/interpolate/extrapolate to it (Section 3.2). This reviewer is very curious whether this process will also work in non seq2seq applications. \n\nThis reviewer would have liked to see comparison with dropout on the context vector.\n\n=== Experiments ===\nSince the authors are experimenting w/ seq2seq architectures, its a little bit disappointing they didn't compare it w/ Machine Translation (MT), where there are many published papers to compare to.\n\nThe authors did compare their method on several toy datasets (that are less commonly used in DL literature) and MNIST/CIFAR. The authors show improvement over their own baselines on several toy datasets. The improvement on MNIST/CIFAR over the author's baseline seems marginal at best. The author also didn't cite/compare to the baseline published by Dai et al., 2015 for CIFAR -- here they have a much better LSTM baseline of 25% for CIFAR which beats the author's baseline of 32.35% and the author's method of 31.93%.\n\nThe experiments would be much more convincing if they did it on seq2seq+MT on say EN-FR or EN-DE. There is almost no excuse why the experiments wasn't run on the MT task, given this is the first application of seq2seq was born from. Even if not MT, then at least the sentiment analysis tasks (IMDB/Rotten Tomatoes) of the Dai et al., 2015 paper which this paper is so heavily based on for the sequence autoencoder.\n\n=== References ===\nSomething is wrong w/ your references latex setting? Seems like a lot of the conference/journal names are omitted. Additionally, you should update many cites to use the conference/journal name rather than just \"arxiv\".\n\nListen, attend and spell (should be Listen, Attend and Spell: A Neural Network for Large Vocabulary Conversational Speech Recognition) -> ICASSP\nif citing ICASSP paper above, should also cite Bahandau paper \"End-to-End Attention-based Large Vocabulary Speech Recognition\" which was published in parallel (also in ICASSP).\n\nAdam: A method for stochastic optimization -> ICLR\nAuto-encoding variational bayes -> ICLR\nAddressing the rare word problem in neural machine translation -> ACL\nPixel recurrent neural networks -> ICML\nA neural conversational model -> ICML Workshop\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Dataset Augmentation in Feature Space
["Terrance DeVries", "Graham W. Taylor"]
Dataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in supervised learning. While effective in tasks such as visual recognition, the set of transformations must be carefully designed, implemented, and tested for every new domain, limiting its re-use and generality. In this paper, we adopt a simpler, domain-agnostic approach to dataset augmentation. We start with existing data points and apply simple transformations such as adding noise, interpolating, or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space. A re-kindling of interest in unsupervised representation learning makes this technique timely and more effective. It is a simple proposal, but to-date one that has not been tested empirically. Working in the space of context vectors generated by sequence-to-sequence models, we demonstrate a technique that is effective for both static and sequential data.
["Unsupervised Learning"]
https://openreview.net/forum?id=HJ9rLLcxg
https://openreview.net/pdf?id=HJ9rLLcxg
https://openreview.net/forum?id=HJ9rLLcxg&noteId=ByUItzz4g
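The feature-space extrapolation these reviews discuss reduces, in our reading, to a one-line transformation of encoded context vectors: c' = (c_i - c_j) * lam + c_i, pushing a point away from a same-class neighbour. The formula and the default lam = 0.5 are our paraphrase of the paper; the toy data below is purely illustrative.

import numpy as np

def extrapolate(contexts, neighbours, lam=0.5):
    """contexts, neighbours: (N, D) arrays of paired same-class context vectors."""
    return (contexts - neighbours) * lam + contexts  # push away from the neighbour

# usage on toy data: augment each encoded point with one extrapolated variant
C = np.random.randn(100, 128)      # stand-in for sequence-AE context vectors
idx = np.random.permutation(100)   # stand-in for same-class nearest neighbours
C_aug = np.concatenate([C, extrapolate(C, C[idx])], axis=0)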
H18MIfimg
BJC_jUqxe
ICLR.cc/2017/conference/-/paper319/official/review
{"title": "Strong, but some framing issues", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper introduces a sentence encoding model (for use within larger text understanding models) that can extract a matrix-valued sentence representation by way of within-sentence attention. The new model lends itself to (slightly) more informative visualizations than could be gotten otherwise, and beats reasonable baselines on three datasets.\n\nThe paper is reasonably clear, I see no major technical issues, and the proposed model is novel and effective. It could plausibly be relevant to sequence modeling tasks beyond NLP. I recommend acceptance.\n\nThere is one fairly serious writing issue that I'd like to see fixed, though: The abstract, introduction, and related work sections are all heavily skewed towards unsupervised learning. The paper doesn't appear to be doing unsupervised learning, and the ideas are no more nor less suited to unsupervised learning than any other mainstream ideas in the sentence encoding literature.\n\nDetails:\n- You should be clearer about how you expect these embeddings to be used, since that will be of certain interest to anyone attempting to use the results of this work. In particular, how you should convert the matrix representation into a vector for downstream tasks that require one. Some of the content of your reply to my comment could be reasonably added to the paper.\n- A graphical representation of the structure of the model would be helpful.\n- The LSTMN (Cheng et al., EMNLP '16) is similar enough to this work that an explicit comparison would be helpful. Again, incorporating your reply to my comment into the paper would be more than adequate. \n- Jiwei Li et al. (Visualizing and Understanding Neural Models in NLP, NAACL '15) present an alternative way of visualizing the influence of words on sentence encodings without using cross-sentence attention. A brief explicit comparison would be nice here.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
A STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING
["Zhouhan Lin", "Minwei Feng", "Cicero Nogueira dos Santos", "Mo Yu", "Bing Xiang", "Bowen Zhou", "Yoshua Bengio"]
This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks.
["Natural language processing", "Deep learning", "Supervised Learning"]
https://openreview.net/forum?id=BJC_jUqxe
https://openreview.net/pdf?id=BJC_jUqxe
https://openreview.net/forum?id=BJC_jUqxe&noteId=H18MIfimg
ByHCv9b4e
BJC_jUqxe
ICLR.cc/2017/conference/-/paper319/official/review
{"title": "Interesting embedding method, lacking in analysis of 2d structure", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes a method for representing sentences as a 2d matrix by utilizing a self-attentive mechanism on the hidden states of a bi-directional LSTM encoder. This work differs from prior work mainly in the 2d structure of embedding, which the authors use to produce heat-map visualizations of input sentences and to generate good performance on several downstream tasks.\n\nThere is a substantial amount of prior work which the authors do not appropriately address, some of which is listed in previous comments. The main novelty of this work is in the 2d structure of embeddings, and as such, I would have liked to see this structure investigated in much more depth. Specifically, a couple important relevant experiments would have been:\n\n* How do the performance and visualizations change as the number of attention vectors (r) varies?\n* For a fixed parameter budget, how important is using multiple attention vectors versus, say, using a larger hidden state or embedding size?\n\nI would recommend changing some of the presentation in the penalization term section. Specifically, the statement that \"the best way to evaluate the diversity is definitely the Kullback Leibler divergence between any 2 of the summation weight vectors\" runs somewhat counter to the authors' comments about this topic below.\n\nIn Fig. (2), I did not find the visualizations to provide particularly compelling evidence that the multiple attention vectors were doing much of interest beyond a single attention vector, even with penalization. To me this seems like a necessary component to support the main claims of this paper.\n\nOverall, while I found the architecture interesting, I am not convinced that the model's main innovation -- the 2d structure of the embedding matrix -- is actually doing anything important or meaningful beyond what is being accomplished by similar attentive embedding models already present in the literature. Further experiments demonstrating this effect would be necessary for me to give this paper my full endorsement.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
A STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING
["Zhouhan Lin", "Minwei Feng", "Cicero Nogueira dos Santos", "Mo Yu", "Bing Xiang", "Bowen Zhou", "Yoshua Bengio"]
This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks.
["Natural language processing", "Deep learning", "Supervised Learning"]
https://openreview.net/forum?id=BJC_jUqxe
https://openreview.net/pdf?id=BJC_jUqxe
https://openreview.net/forum?id=BJC_jUqxe&noteId=ByHCv9b4e
H1zxxEXEg
BJC_jUqxe
ICLR.cc/2017/conference/-/paper319/official/review
{"title": "Interesting idea but need additional work to be convincing", "rating": "6: Marginally above acceptance threshold", "review": "I like the idea in this paper that use not just one but multiple attentional vectors to extract multiple representations for a sentence. The authors have demonstrated consistent gains across three different tasks Age, Yelp, & SNLI. However, I'd like to see more analysis on the 2D representations (as concerned by another reviewer) to be convinced. Specifically, r=30 seems to be a pretty large value when applying to short sentences like tweets or those in the SNLI dataset. I'd like to see the effect of varying r from small to large value. With large r value, I suspect your models might have an advantage in having a much larger number of parameters (specifically in the supervised components) compare to other models. To make it transparent, the model sizes should be reported. I'd also like to see performances on the dev sets or learning curves.\n\nIn the conclusion, the authors remark that \"attention mechanism reliefs the burden of LSTM\". If the 2D representations are effective in that aspect, I'd expect that the authors might be able to train with a smaller LSTM. Testing the effect of LSTM dimension vs $r$ will be helpful.\n\nLastly, there is a problem in the presentation of the paper in which there is no training objective defined. Readers have to read until the experimental sections to guess that the authors perform supervised learning and back-prop through the self-attention mechanism as well as the LSTM.\n\n* Minor comments:\nTypos: netowkrs, toghter, performd\nMissing year for the citation of (Margarit & Subramaniam)\nIn figure 3, attention plotswith and without penalization look similar.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
A STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING
["Zhouhan Lin", "Minwei Feng", "Cicero Nogueira dos Santos", "Mo Yu", "Bing Xiang", "Bowen Zhou", "Yoshua Bengio"]
This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks.
["Natural language processing", "Deep learning", "Supervised Learning"]
https://openreview.net/forum?id=BJC_jUqxe
https://openreview.net/pdf?id=BJC_jUqxe
https://openreview.net/forum?id=BJC_jUqxe&noteId=H1zxxEXEg
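The model the three reviews above debate can be summarized in a short sketch: the annotation matrix A = softmax(W_s2 tanh(W_s1 H^T)) yields an r-row matrix embedding M = A H, and the penalization term is the Frobenius norm ||A A^T - I||_F^2. The hyperparameter values in the snippet are illustrative assumptions, not the authors' exact settings.

import torch
import torch.nn as nn

class SelfAttentiveEmbedding(nn.Module):
    def __init__(self, hidden=600, d_a=350, r=30):
        super().__init__()
        self.Ws1 = nn.Linear(hidden, d_a, bias=False)
        self.Ws2 = nn.Linear(d_a, r, bias=False)

    def forward(self, H):                       # H: (batch, seq_len, hidden)
        A = torch.softmax(self.Ws2(torch.tanh(self.Ws1(H))), dim=1)  # over time
        A = A.transpose(1, 2)                   # (batch, r, seq_len): r attention rows
        M = A @ H                               # (batch, r, hidden): matrix embedding
        I = torch.eye(A.size(1), device=A.device)
        penalty = ((A @ A.transpose(1, 2) - I) ** 2).sum(dim=(1, 2)).mean()
        return M, penalty

H = torch.randn(4, 20, 600)               # pretend bi-LSTM hidden states
M, penalty = SelfAttentiveEmbedding()(H)  # M: (4, 30, 600)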
B1v-2iWNx
SJ8BZTjeg
ICLR.cc/2017/conference/-/paper587/official/review
{"title": "Review", "rating": "3: Clear rejection", "review": "The paper proposes an approach to unsupervised learning based on generative adversarial networks (GANs) and clustering. The general topic of unsupervised learning is important, and the proposed approach makes some sense, but experimental evaluation is very weak and does not allow to judge if the proposed method is competitive with existing alternatives. Therefore the paper cannot be published in its current form. \n\nMore detailed remarks (many of these are copies of my pre-review questions the authors have not responded to):\n\n1) Realted work overview looks incomplete. There has been work on combining clustering with deep learning, for example [1] or [2] look very related. A long list of potentially related papers can be found here: https://amundtveit.com/2016/12/02/deep-learning-for-clustering/ . From the GAN side, for example [3] looks related. I would like the authors to comment on relation of their approach to existing work, if possible compare with existing approaches, and if not possible - explain why.\n\n[1] Xie et al., \"Unsupervised Deep Embedding for Clustering Analysis\", ICML 2016 http://jmlr.org/proceedings/papers/v48/xieb16.pdf\n[2] Yang et al., \"Joint Unsupervised Learning of Deep Representations and Image Clusters\", CVPR 2016 http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Yang_Joint_Unsupervised_Learning_CVPR_2016_paper.pdf\n[3] J.T. Springenberg, \"Unsupervised and semi-supervised learning with categorical generative adversarial networks\", ICLR 2016, https://arxiv.org/pdf/1511.06390v2.pdf\n\n2) The authors do not report classification accuracies, which makes it very difficult to compare their results with existing work. Classification accuracies should be reported. They may not be a perfect measure of feature quality, but reporting them in addition to ARI and NMI would not hurt.\n\n3) The authors have not compared their approach to existing unsupervised feature learning approaches, for example feature learning with k-means (Coates and Ng 2011), sparse coding methods such as Hierarchical Matching Pursuit (Bo et al., 2012 and 2013), Exemplar-CNN (Dosovitskiy et al. 2014)\n\n4) Looks like in Figure 2 every \"class\" consists essentially of a single image and its slight variations? Doesn't this mean GAN training failed? Do all your GANs produce samples of this quality? \n\n5) Why do you not show results with visual features on STL-10?\n\n6) Supervisedly learned filters in Figure 3 looks unusual to me, they are normally not that smooth. Have you optimized the hyperparameters? What is the resulting accuracy?\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Unsupervised Learning Using Generative Adversarial Training And Clustering
["Vittal Premachandran", "Alan L. Yuille"]
In this paper, we propose an unsupervised learning approach that makes use of two components; a deep hierarchical feature extractor, and a more traditional clustering algorithm. We train the feature extractor in a purely unsupervised manner using generative adversarial training and, in the process, study the strengths of learning using a generative model as an adversary. We also show that adversarial training as done in Generative Adversarial Networks (GANs) is not sufficient to automatically group data into categorical clusters. Instead, we use a more traditional grouping algorithm, k-means clustering, to cluster the features learned using adversarial training. We experiment on three well-known datasets, CIFAR-10, CIFAR-100 and STL-10. The experiments show that the proposed approach performs similarly to supervised learning approaches, and, might even be better in situations with small amounts of labeled training data and large amounts of unlabeled data.
["generative adversarial training", "unsupervised", "clustering", "adversarial training", "unsupervised learning", "use", "components", "traditional clustering algorithm", "feature extractor"]
https://openreview.net/forum?id=SJ8BZTjeg
https://openreview.net/pdf?id=SJ8BZTjeg
https://openreview.net/forum?id=SJ8BZTjeg&noteId=B1v-2iWNx
SynBgHuNx
SJ8BZTjeg
ICLR.cc/2017/conference/-/paper587/official/review
{"title": "review", "rating": "3: Clear rejection", "review": "The papers investigates the task of unsupervised learning with deep features via k-means clustering. The entire pipeline can be decomposed into two steps: (1) unsupervised feature learning based on GAN framework and (2) k-means clustering using learned deep network features. Following the GAN framework and its extension InfoGAN, the first step is to train a pair of discriminator network and generator network from scratch using min-max objective. Then, it applies k-means clustering on the top layer features from discriminator network. For evaluation, the proposed unsupervised feature learning approach is compared against traditional hand-crafted features such as HOG and supervised method on three benchmark datasets. Normalized Mutual Information (NMI) and Adjusted RAND Index (ARI) have been used as the evaluation metrics for experimental comparison. Although the proposed method may be potentially useful in practice (if refined further), I find the method lacks novelty, and the experimental results are not significant enough.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Unsupervised Learning Using Generative Adversarial Training And Clustering
["Vittal Premachandran", "Alan L. Yuille"]
In this paper, we propose an unsupervised learning approach that makes use of two components; a deep hierarchical feature extractor, and a more traditional clustering algorithm. We train the feature extractor in a purely unsupervised manner using generative adversarial training and, in the process, study the strengths of learning using a generative model as an adversary. We also show that adversarial training as done in Generative Adversarial Networks (GANs) is not sufficient to automatically group data into categorical clusters. Instead, we use a more traditional grouping algorithm, k-means clustering, to cluster the features learned using adversarial training. We experiment on three well-known datasets, CIFAR-10, CIFAR-100 and STL-10. The experiments show that the proposed approach performs similarly to supervised learning approaches, and, might even be better in situations with small amounts of labeled training data and large amounts of unlabeled data.
["generative adversarial training", "unsupervised", "clustering", "adversarial training", "unsupervised learning", "use", "components", "traditional clustering algorithm", "feature extractor"]
https://openreview.net/forum?id=SJ8BZTjeg
https://openreview.net/pdf?id=SJ8BZTjeg
https://openreview.net/forum?id=SJ8BZTjeg&noteId=SynBgHuNx
BkUsyJGEl
SJ8BZTjeg
ICLR.cc/2017/conference/-/paper587/official/review
{"title": "review", "rating": "3: Clear rejection", "review": "This paper proposed an unsupervised learning method based on running kmeans on the features learned by a discriminator network in a generative adversarial network setup. Unsupervised learning methods with GANs is certainly a relevant topic but this paper does not propose anything particularly novel as far as I can tell. More importantly, the evaluation methods in this paper are extremely lacking. The authors omit classification results on CIFAR and STL-10 and instead the only quantitative evaluation plot the performance of the clustering algorithm on the features. Not only are classification results not shown, no comparisons are made to the wealth of related work. I list just a few highly related techniques below. Finally, it appear the authors have not train their GANs correctly as the samples in Fig.2 appear to be from a model that has collapsed during training. In summary, the ideas in this paper are potentially interesting but this paper should not be accepted in its current form due to lack of experimental results and comparisons. \n\n(non-exhaustive) list of related work on unsupervised learning (with and without GANs):\n[1] Springenberg. Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks, ICLR 2016 (https://arxiv.org/abs/1511.06390)\n[2] Salimans et al. Improved Techniques for Training GANs. NIPS 2016 (https://arxiv.org/abs/1606.03498)\n[3] Dosovitskiy et al. Discriminative unsupervised feature learning with convolutional neural networks, NIPS 2014 (https://arxiv.org/abs/1406.6909)\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Unsupervised Learning Using Generative Adversarial Training And Clustering
["Vittal Premachandran", "Alan L. Yuille"]
In this paper, we propose an unsupervised learning approach that makes use of two components; a deep hierarchical feature extractor, and a more traditional clustering algorithm. We train the feature extractor in a purely unsupervised manner using generative adversarial training and, in the process, study the strengths of learning using a generative model as an adversary. We also show that adversarial training as done in Generative Adversarial Networks (GANs) is not sufficient to automatically group data into categorical clusters. Instead, we use a more traditional grouping algorithm, k-means clustering, to cluster the features learned using adversarial training. We experiment on three well-known datasets, CIFAR-10, CIFAR-100 and STL-10. The experiments show that the proposed approach performs similarly to supervised learning approaches, and, might even be better in situations with small amounts of labeled training data and large amounts of unlabeled data.
["generative adversarial training", "unsupervised", "clustering", "adversarial training", "unsupervised learning", "use", "components", "traditional clustering algorithm", "feature extractor"]
https://openreview.net/forum?id=SJ8BZTjeg
https://openreview.net/pdf?id=SJ8BZTjeg
https://openreview.net/forum?id=SJ8BZTjeg&noteId=BkUsyJGEl
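The pipeline criticized in these reviews is simple enough to sketch: train a GAN, extract the discriminator's penultimate-layer features, cluster them with k-means, and score the clusters with NMI/ARI. In the sketch below, `discriminator_features` is a hypothetical stand-in for the trained feature extractor; the GAN training itself is omitted.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def cluster_gan_features(discriminator_features, images, labels, k=10):
    feats = discriminator_features(images)  # (N, D) penultimate activations
    assignments = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    # evaluate cluster quality against ground-truth labels, as in the paper
    return (normalized_mutual_info_score(labels, assignments),
            adjusted_rand_score(labels, assignments))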
rkNheaUEl
ryxB0Rtxx
ICLR.cc/2017/conference/-/paper156/official/review
{"title": "a good paper", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper provides some theoretical guarantees for the identity parameterization by showing that 1) arbitrarily deep linear residual networks have no spurious local optima; and 2) residual networks with ReLu activations have universal finite-sample expressivity. This paper is well written and studied a fundamental problem in deep neural network. I am very positive on this paper overall and feel that this result is quite significant by essentially showing the stability of auto-encoder, given the fact that it is hard to provide concrete theoretical guarantees for deep neural networks.\n\nOne of key questions is how to extent the result in this paper to the more general nonlinear actuation function case. \n\nMinors: one line before Eq. (3.1), U \\in R ? \\times k\n\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Identity Matters in Deep Learning
["Moritz Hardt", "Tengyu Ma"]
An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation. This idea not only motivated various normalization techniques, such as batch normalization, but was also key to the immense success of residual networks. In this work, we put the principle of identity parameterization on a more solid theoretical footing alongside further empirical progress. We first give a strikingly simple proof that arbitrarily deep linear residual networks have no spurious local optima. The same result for feed-forward networks in their standard parameterization is substantially more delicate. Second, we show that residual networks with ReLu activations have universal finite-sample expressivity in the sense that the network can represent any function of its sample provided that the model has more parameters than the sample size. Directly inspired by our theory, we experiment with a radically simple residual architecture consisting of only residual convolutional layers and ReLu activations, but no batch normalization, dropout, or max pool. Our model improves significantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and ImageNet classification benchmarks.
["batch normalization", "residual networks", "networks", "relu activations", "model", "identity matters", "deep", "design principle", "deep learning"]
https://openreview.net/forum?id=ryxB0Rtxx
https://openreview.net/pdf?id=ryxB0Rtxx
https://openreview.net/forum?id=ryxB0Rtxx&noteId=rkNheaUEl
SJb64ilNl
ryxB0Rtxx
ICLR.cc/2017/conference/-/paper156/official/review
{"title": "", "rating": "6: Marginally above acceptance threshold", "review": "This paper investigates the identity parametrization also known as shortcuts where the output of each layer has the form h(x)+x instead of h(x). This has been shown to perform well in practice (eg. ResNet). The discussions and experiments in the paper are interesting. Here's a few comments on the paper:\n\n-Section 2: Studying the linear networks is interesting by itself. However, it is not clear that how this could translate to any insight about non-linear networks. For example, you have proved that every critical point is global minimum. I think it is helpful to add some discussion about the relationship between linear and non-linear networks.\n\n-Section 3: The construction is interesting but the expressive power of residual network is within a constant factor of general feedforward networks and I don't see why we need a different proof given all the results on finite sample expressivity of feedforward networks. I appreciate if you clarify this.\n\n-Section 4: I like the experiments. The choice of random projection on the top layer is brilliant. However, since you have combined this choice with all-convolutional residual networks, it is hard for the reader to separate the affect of each of them. Therefore, I suggest reporting the numbers for all-convolutional residual networks with learned top layer and also ResNet with random projection on the top layer.\n\nMinor comments:\n\n1- I don't agree that Batch Normalization can be reduced to identity transformation and I don't know if bringing that in the abstract without proper discussion is a good idea.\n\n2- Page 5 above assumption 3.1 : x^(i)=1 ==> ||x^(i)||_2=1\n\n ", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Identity Matters in Deep Learning
["Moritz Hardt", "Tengyu Ma"]
An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation. This idea not only motivated various normalization techniques, such as batch normalization, but was also key to the immense success of residual networks. In this work, we put the principle of identity parameterization on a more solid theoretical footing alongside further empirical progress. We first give a strikingly simple proof that arbitrarily deep linear residual networks have no spurious local optima. The same result for feed-forward networks in their standard parameterization is substantially more delicate. Second, we show that residual networks with ReLu activations have universal finite-sample expressivity in the sense that the network can represent any function of its sample provided that the model has more parameters than the sample size. Directly inspired by our theory, we experiment with a radically simple residual architecture consisting of only residual convolutional layers and ReLu activations, but no batch normalization, dropout, or max pool. Our model improves significantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and ImageNet classification benchmarks.
["batch normalization", "residual networks", "networks", "relu activations", "model", "identity matters", "deep", "design principle", "deep learning"]
https://openreview.net/forum?id=ryxB0Rtxx
https://openreview.net/pdf?id=ryxB0Rtxx
https://openreview.net/forum?id=ryxB0Rtxx&noteId=SJb64ilNl
Hy8H45WVg
ryxB0Rtxx
ICLR.cc/2017/conference/-/paper156/official/review
{"title": "", "rating": "5: Marginally below acceptance threshold", "review": "Paper Summary:\n\nAuthors investigate identity re-parametrization in the linear and the non linear case. \n\nDetailed comments:\n\n\u2014 Linear Residual Network:\n\nThe paper shows that for a linear residual network any critical point is a global optimum. This problem is non convex it is interesting that this simple re-parametrization leads to such a result. \n\n \u2014 Non linear Residual Network:\n\nAuthors propose a construction that maps the points to their labels via a resnet , using an initial random projection, followed by a residual block that clusters the data based on their label, and a last layer that maps the clusters to the label. \n\n1- In Eq 3.4 seems the dimensions are not matching q_j in R^k and e_j in R^r. please clarify \n\n2- The construction seems fine, but what is special about the resnet here in this construction? One can do a similar construction if we did not have the identity? can you discuss this point?\nIn the linear case it is clear from a spectral point of view how the identity is helping the optimization. Please provide some intuition. \n\n3- Existence of a network in the residual class that overfits does it give us any intuition on why residual network outperform other architectures? What does an existence result of such a network tell us about its representation power ? \nA simple linear model under the assumption that points can not be too close can overfit the data, and get fast convergence rate (see for instance tsybakov noise condition).\n\n4- What does the construction tell us about the number of layers? \n\n5- clustering the activation independently from the label, is an old way to pretrain the network. One could use those centroids as weights for the next layer (this is also related to Nystrom approximation see for instance https://www.cse.ust.hk/~twinsen/nystrom.pdf ). Your clustering is very strongly connected to the label at each residual block.\nI don't think this is appealing or useful since no feature extraction is happening. Moreover the number of layers in this construction\ndoes not matter. Can you weaken the clustering to be independent to the label at least in the early layers? then one could you use your construction as an initialization in the training. \n\n\u2014 Experiments : \n\n- last layer is not trained means the layer before the linear layer preceding the softmax?\n\nMinor comments:\n\nAbstract: how the identity mapping motivated batch normalization?\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Identity Matters in Deep Learning
["Moritz Hardt", "Tengyu Ma"]
An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation. This idea not only motivated various normalization techniques, such as batch normalization, but was also key to the immense success of residual networks. In this work, we put the principle of identity parameterization on a more solid theoretical footing alongside further empirical progress. We first give a strikingly simple proof that arbitrarily deep linear residual networks have no spurious local optima. The same result for feed-forward networks in their standard parameterization is substantially more delicate. Second, we show that residual networks with ReLu activations have universal finite-sample expressivity in the sense that the network can represent any function of its sample provided that the model has more parameters than the sample size. Directly inspired by our theory, we experiment with a radically simple residual architecture consisting of only residual convolutional layers and ReLu activations, but no batch normalization, dropout, or max pool. Our model improves significantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and ImageNet classification benchmarks.
["batch normalization", "residual networks", "networks", "relu activations", "model", "identity matters", "deep", "design principle", "deep learning"]
https://openreview.net/forum?id=ryxB0Rtxx
https://openreview.net/pdf?id=ryxB0Rtxx
https://openreview.net/forum?id=ryxB0Rtxx&noteId=Hy8H45WVg
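For readers who want to see the identity parameterization discussed in the review above in action, here is a minimal numpy sketch of gradient descent on a deep linear residual network y = (I + A_L)...(I + A_1) x. Everything here (shapes, toy data, learning rate) is an illustrative assumption, not the paper's setup; it only demonstrates that training from the zero (identity) initialization is well behaved.

```python
import numpy as np

# Minimal sketch of a deep *linear* residual network, y_hat = (I + A_L)...(I + A_1) x,
# with every block zero-initialized so the network starts at the identity.
# All shapes, data, and the learning rate are illustrative assumptions.
rng = np.random.default_rng(0)
d, n, L = 8, 200, 4                                   # dimension, samples, depth
R_true = np.linalg.qr(rng.normal(size=(d, d)))[0]     # ground-truth linear map
X = rng.normal(size=(d, n))
Y = R_true @ X

A = [np.zeros((d, d)) for _ in range(L)]              # residual blocks

def forward(X, A):
    """Apply (I + A_1), ..., (I + A_L), caching intermediate activations."""
    hs = [X]
    for Ai in A:
        hs.append(hs[-1] + Ai @ hs[-1])               # h_i = (I + A_i) h_{i-1}
    return hs

lr = 0.05
for step in range(500):
    hs = forward(X, A)
    g = (hs[-1] - Y) / n                              # grad of 0.5*||h_L - Y||^2 / n
    grads = [None] * L
    for i in reversed(range(L)):
        grads[i] = g @ hs[i].T                        # dL/dA_{i+1} = g_{i+1} h_i^T
        g = g + A[i].T @ g                            # pass grad through (I + A_i)^T
    for i in range(L):
        A[i] -= lr * grads[i]

print("final loss:", 0.5 * np.mean(np.sum((forward(X, A)[-1] - Y) ** 2, axis=0)))
```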
r10X7Es4g
B1ckMDqlg
ICLR.cc/2017/conference/-/paper364/official/review
{"title": "Elegant use of MoE for expanding model capacity, but it would be very nice to discuss MoE alternatives in terms of computational efficiency and other factors.", "rating": "6: Marginally above acceptance threshold", "review": "Paper Strengths: \n-- Elegant use of MoE for expanding model capacity and enabling training large models necessary for exploiting very large datasets in a computationally feasible manner\n\n-- The effective batch size for training the MoE drastically increased also\n\n-- Interesting experimental results on the effects of increasing the number of MoEs, which is expected.\n\n\nPaper Weaknesses:\n\n--- there are many different ways of increasing model capacity to enable the exploitation of very large datasets; it would be very nice to discuss the use of MoE and other alternatives in terms of computational efficiency and other factors.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
["Noam Shazeer", "*Azalia Mirhoseini", "*Krzysztof Maziarz", "Andy Davis", "Quoc Le", "Geoffrey Hinton", "Jeff Dean"]
The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
["Deep learning"]
https://openreview.net/forum?id=B1ckMDqlg
https://openreview.net/pdf?id=B1ckMDqlg
https://openreview.net/forum?id=B1ckMDqlg&noteId=r10X7Es4g
B1ZFEvR4x
B1ckMDqlg
ICLR.cc/2017/conference/-/paper364/official/review
{"title": "", "rating": "7: Good paper, accept", "review": "This paper proposes a method for significantly increasing the number of parameters in a single layer while keeping computation in par with (or even less than) current SOTA models. The idea is based on using a large mixture of experts (MoE) (i.e. small networks), where only a few of them are adaptively activated via a gating network. While the idea seems intuitive, the main novelty in the paper is in designing the gating network which is encouraged to achieve two objectives: utilizing all available experts (aka importance), and distributing computation fairly across them (aka load). \nAdditionally, the paper introduces two techniques for increasing the batch-size passed to each expert, and hence maximizing parallelization in GPUs.\nExperiments applying the proposed approach on RNNs in language modelling task show that it can beat SOTA results with significantly less computation, which is a result of selectively using much more parameters. Results on machine translation show that a model with more than 30x number of parameters can beat SOTA while incurring half of the effective computation.\n\nI have the several comments on the paper:\n- I believe that the authors can do a better job in their presentation. The paper currently is at 11 pages (which is too long in my opinion), but I find that Section 3.2 (the crux of the paper) needs better motivation and intuitive explanation. For example, equation 8 deserves more description than currently devoted to it. Additional space can be easily regained by moving details in the experiments section (e.g. architecture and training details) to the appendix for the curious readers. Experiment section can be better organized by finishing on experiment completely before moving to the other one. There are also some glitches in the writing, e.g. the end of Section 3.1. \n- The paper is missing some important references in conditional computation (e.g. https://arxiv.org/pdf/1308.3432.pdf) which deal with very similar issues in deep learning.\n- One very important lesson from the conditional computation literature is that while we can in theory incur much less computation, in practice (especially with the current GPU architectures) the actual time does not match the theory. This can be due to inefficient branching in GPUs. It would be nice if the paper includes a discussion of how their model (and perhaps implementation) deal with this problem, and why it scales well in practice.\n- Table 1 and Table 3 contain repetitive information, and I think they should be combined in one (maybe moving Table 3 to appendix). One thing I do not understand is how does the number of ops/timestep relate to the training time. This also related to the pervious comment.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
["Noam Shazeer", "*Azalia Mirhoseini", "*Krzysztof Maziarz", "Andy Davis", "Quoc Le", "Geoffrey Hinton", "Jeff Dean"]
The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
["Deep learning"]
https://openreview.net/forum?id=B1ckMDqlg
https://openreview.net/pdf?id=B1ckMDqlg
https://openreview.net/forum?id=B1ckMDqlg&noteId=B1ZFEvR4x
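The noisy top-k gating that the review above asks to be better motivated (equation 8 of the paper) can be sketched compactly. The following numpy snippet is an illustration in the spirit of that gate, not the paper's exact formulation; the softplus noise scaling, the dimensions, and the expert shapes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, k = 16, 8, 2                            # input dim, expert count, experts kept

W_g = rng.normal(scale=0.1, size=(n_experts, d))      # gating weights
W_noise = rng.normal(scale=0.1, size=(n_experts, d))  # per-expert noise scales

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def noisy_top_k_gate(x):
    """Keep the k largest noisy logits, softmax over them, zero the rest."""
    noise_std = np.log1p(np.exp(W_noise @ x))         # softplus keeps the std positive
    logits = W_g @ x + rng.normal(size=n_experts) * noise_std
    top = np.argsort(logits)[-k:]                     # indices of the k largest logits
    gates = np.zeros(n_experts)
    gates[top] = softmax(logits[top])
    return gates

experts = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_experts)]

def moe_layer(x):
    gates = noisy_top_k_gate(x)
    # Only the k active experts are evaluated -- this is the conditional computation.
    return sum(gates[i] * (experts[i] @ x) for i in np.nonzero(gates)[0])

y = moe_layer(rng.normal(size=d))
print("output norm:", np.linalg.norm(y))
```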
Hy7i5WXNg
B1ckMDqlg
ICLR.cc/2017/conference/-/paper364/official/review
{"title": "Nice use of MoE with good results", "rating": "7: Good paper, accept", "review": "This paper describes a method for greatly expanding network model size (in terms of number of stored parameters) in the context of a recurrent net, by applying a Mixture of Experts between recurrent net layers that is shared between all time steps. By process features from all timesteps at the same time, the effective batch size to the MoE is increased by a factor of the number of steps in the model; thus even for sparsely assigned experts, each expert can be used on a large enough sub-batch of inputs to remain computationally efficient. Another second technique that redistributes elements within a distributed model is also described, further increasing per-expert batch sizes.\n\nExperiments are performed on language modeling and machine translation tasks, showing significant gains by increasing the number of experts, compared to both SoA as well as explicitly computationally-matched baseline systems.\n\nAn area that falls a bit short is in presenting plots or statistics on the real computational load and system behavior. While two loss terms were employed to balance the use of experts, these are not explored in the experiments section. It would have been nice to see the effects of these more, along with the effects of increasing effective batch sizes, e.g. measurements of the losses over the course of training, compared to the counts/histogram distributions of per-expert batch sizes.\n\nOverall I think this is a well-described system that achieves good results, using a nifty placement for the MoE that can overcome what otherwise might be a disadvantage for sparse computation.\n\n\n\nSmall comment:\nI like Fig 3, but it's not entirely clear whether datapoints coincide between left and right plots. The H-H line has 3 points on left but 5 on the right? Also would be nice if the colors matched between corresponding lines.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
["Noam Shazeer", "*Azalia Mirhoseini", "*Krzysztof Maziarz", "Andy Davis", "Quoc Le", "Geoffrey Hinton", "Jeff Dean"]
The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
["Deep learning"]
https://openreview.net/forum?id=B1ckMDqlg
https://openreview.net/pdf?id=B1ckMDqlg
https://openreview.net/forum?id=B1ckMDqlg&noteId=Hy7i5WXNg
ByGtCUlVl
B1hdzd5lg
ICLR.cc/2017/conference/-/paper453/official/review
{"title": "", "rating": "7: Good paper, accept", "review": "SUMMARY.\n\nThe paper proposes a gating mechanism to combine word embeddings with character-level word representations.\nThe gating mechanism uses features associated to a word to decided which word representation is the most useful.\nThe fine-grain gating is applied as part of systems which seek to solve the task of cloze-style reading comprehension question answering, and Twitter hashtag prediction.\nFor the question answering task, a fine-grained reformulation of gated attention for combining document words and questions is proposed.\nIn both tasks the fine-grain gating helps to get better accuracy, outperforming state-of-the-art methods on the CBT dataset and performing on-par with state-of-the-art approach on the SQuAD dataset.\n\n\n----------\n\nOVERALL JUDGMENT\n\nThis paper proposes a clever fine-grained extension of a scalar gate for combining word representation.\nIt is clear and well written. It covers all the necessary prior work and compares the proposed method with previous similar models.\n\nI liked the ablation study that shows quite clearly the impact of individual contributions.\nAnd I also liked the fact that some (shallow) linguistic prior knowledge e.g., pos tags ner tags, frequency etc. has been used in a clever way. \nIt would be interesting to see if syntactic features can be helpful.\n\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Words or Characters? Fine-grained Gating for Reading Comprehension
["Zhilin Yang", "Bhuwan Dhingra", "Ye Yuan", "Junjie Hu", "William W. Cohen", "Ruslan Salakhutdinov"]
Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension. We present a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. We also extend the idea of fine-grained gating to modeling the interaction between questions and paragraphs for reading comprehension. Experiments show that our approach can improve the performance on reading comprehension tasks, achieving new state-of-the-art results on the Children's Book Test and Who Did What datasets. To demonstrate the generality of our gating mechanism, we also show improved results on a social media tag prediction task.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=B1hdzd5lg
https://openreview.net/pdf?id=B1hdzd5lg
https://openreview.net/forum?id=B1hdzd5lg&noteId=ByGtCUlVl
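To make the mechanism described in the review above concrete, here is a minimal numpy sketch of a fine-grained (vector-valued) gate conditioned on token features such as POS/NER tags and frequency. The dimensions and the feature encoding are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, f = 50, 20          # embedding dim; token-feature dim (POS, NER, frequency, ...)

W = rng.normal(scale=0.1, size=(d, f))
b = np.zeros(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fine_grained_gate(word_emb, char_emb, features):
    """Per-dimension convex combination of word- and character-level representations.

    Unlike a scalar gate, g is a d-dimensional vector, so each dimension of the
    combined representation can prefer a different source."""
    g = sigmoid(W @ features + b)
    return g * word_emb + (1.0 - g) * char_emb

h = fine_grained_gate(rng.normal(size=d), rng.normal(size=d), rng.normal(size=f))
print(h.shape)
```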
HJ-dvmfEg
B1hdzd5lg
ICLR.cc/2017/conference/-/paper453/official/review
{"title": "", "rating": "6: Marginally above acceptance threshold", "review": "I think the problem here is well motivated, the approach is insightful and intuitive, and the results are convincing of the approach (although lacking in variety of applications). I like the fact that the authors use POS and NER in terms of an intermediate signal for the decision. Also they compare against a sufficient range of baselines to show the effectiveness of the proposed model.\n\nI am also convinced by the authors' answers to my question, I think there is sufficient evidence provided in the results to show the effectiveness of the inductive bias introduced by the fine-grained gating model.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Words or Characters? Fine-grained Gating for Reading Comprehension
["Zhilin Yang", "Bhuwan Dhingra", "Ye Yuan", "Junjie Hu", "William W. Cohen", "Ruslan Salakhutdinov"]
Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension. We present a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. We also extend the idea of fine-grained gating to modeling the interaction between questions and paragraphs for reading comprehension. Experiments show that our approach can improve the performance on reading comprehension tasks, achieving new state-of-the-art results on the Children's Book Test and Who Did What datasets. To demonstrate the generality of our gating mechanism, we also show improved results on a social media tag prediction task.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=B1hdzd5lg
https://openreview.net/pdf?id=B1hdzd5lg
https://openreview.net/forum?id=B1hdzd5lg&noteId=HJ-dvmfEg
SJbtVHfVg
B1hdzd5lg
ICLR.cc/2017/conference/-/paper453/official/review
{"title": "review", "rating": "7: Good paper, accept", "review": "This paper proposes a new gating mechanism to combine word and character representations. The proposed model sets a new state-of-the-art on the CBT dataset; the new gating mechanism also improves over scalar gates without linguistic features on SQuAD and a twitter classification task. \n\nIntuitively, the vector-based gate working better than the scalar gate is unsurprising, as it is more similar to LSTM and GRU gates. The real contribution of the paper for me is that using features such as POS tags and NER help learn better gates. The visualization in Figure 3 and examples in Table 4 effectively confirm the utility of these features, very nice! \n\nIn sum, while the proposed gate is nothing technically groundbreaking, the paper presents a very focused contribution that I think will be useful to the NLP community. Thus, I hope it is accepted.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Words or Characters? Fine-grained Gating for Reading Comprehension
["Zhilin Yang", "Bhuwan Dhingra", "Ye Yuan", "Junjie Hu", "William W. Cohen", "Ruslan Salakhutdinov"]
Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension. We present a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. We also extend the idea of fine-grained gating to modeling the interaction between questions and paragraphs for reading comprehension. Experiments show that our approach can improve the performance on reading comprehension tasks, achieving new state-of-the-art results on the Children's Book Test and Who Did What datasets. To demonstrate the generality of our gating mechanism, we also show improved results on a social media tag prediction task.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=B1hdzd5lg
https://openreview.net/pdf?id=B1hdzd5lg
https://openreview.net/forum?id=B1hdzd5lg&noteId=SJbtVHfVg
HkSTCzENe
ryUPiRvge
ICLR.cc/2017/conference/-/paper50/official/review
{"title": "Important task but marginal contribution", "rating": "3: Clear rejection", "review": "The authors attempt to extract analytical equations governing physical systems from observations - an important task. Being able to capture succinct and interpretable rules which a physical system follows is of great importance. However, the authors do this with simple and naive tools which will not scale to complex tasks, offering no new insights or advances to the field. \n\nThe contribution of the paper (and the first four pages of the submission!) can be summarised in one sentence: \n\"Learn the weights of a small network with cosine, sinusoid, and input elements products activation functions s.t. the weights are sparse (L1)\".\nThe learnt network weights with its fixed structure are then presented as the learnt equation. \n\nThis research uses tools from literature from the '90s (I haven't seen the abbreviation ANN (page 3) for a long time) and does not build on modern techniques which have advanced a lot since then. I would encourage the authors to review modern literature and continue working on this important task.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Extrapolation and learning equations
["Georg Martius", "Christoph H. Lampert"]
In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified.
["Supervised Learning", "Deep learning", "Structured prediction"]
https://openreview.net/forum?id=ryUPiRvge
https://openreview.net/pdf?id=ryUPiRvge
https://openreview.net/forum?id=ryUPiRvge&noteId=HkSTCzENe
S1MchhtNe
ryUPiRvge
ICLR.cc/2017/conference/-/paper50/official/review
{"title": "Learning physical phenomenon", "rating": "7: Good paper, accept", "review": "Thank you for an interesting perspective on the neural approaches to approximate physical phenomenon. This paper describes a method to extrapolate a given dataset and predict formulae with naturally occurring functions like sine, cosine, multiplication etc. \n \nPros \n- The approach is rather simple and hence can be applied to existing methods. The major difference is incorporating functions with 2 or more inputs which was done successfully in the paper. \n \n- It seems that MLP, even though it is good for interpolation, it fails to extrapolate data to model the correct function. It was a great idea to use basis functions like sine, cosine to make the approach more explicit. \n \nCons \n- Page 8, the claim that x2 cos(ax1 + b) ~ 1.21(cos(-ax1 + \u03c0 + b + 0.41x2) + sin(ax1 + b + 0.41x2)) for y in [-2,2] is not entirely correct. There should be some restrictions on 'a' and 'b' as well as the approximate equality doesn't hold for all real values of 'a' and 'b'. Although, for a=2*pi and b=pi/4, the claim is correct so the model is predicting a correct solution within certain limits. \n \n- Most of the experiments involve up to 4 variables. It would be interesting to see how the neural approach models hundreds of variables. \n \n- Another way of looking at the model is that the non-linearities like sine, cosine, multiplication act as basis functions. If the data is a linear combination of such functions, the model will be able to learn the weights. As division is not one of the non-linearities, predicting expressions in Equation 13 seems unlikely. Hence, I was wondering, is it possible to make sure that this architecture is a universal approximator. \n \nSuggested Edits \n- Page 8, It seems that there is a typographical error in the expression 1.21(cos(ax1 + \u03c0 + b + 0.41x2) + sin(ax1 + b + 0.41x2)). When compared with the predicted formula in Figure 4(b), it should be 1.21(cos(-ax1 + \u03c0 + b + 0.41x2) + sin(ax1 + b + 0.41x2)). ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Extrapolation and learning equations
["Georg Martius", "Christoph H. Lampert"]
In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified.
["Supervised Learning", "Deep learning", "Structured prediction"]
https://openreview.net/forum?id=ryUPiRvge
https://openreview.net/pdf?id=ryUPiRvge
https://openreview.net/forum?id=ryUPiRvge&noteId=S1MchhtNe
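The basis-function view raised in the review above (sine, cosine, and multiplication units acting as a dictionary, made sparse by L1) can be sketched as follows. This is an illustrative single-layer variant under assumed dimensions, not the paper's exact EQL architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, u = 2, 3                               # input dim; units per basis-function type
rows = 3 * u + 2 * u                         # id/sin/cos units + two inputs per product unit
W = rng.normal(scale=0.5, size=(rows, d_in))
b = np.zeros(rows)
V = rng.normal(scale=0.5, size=(1, 4 * u))   # linear readout over all unit outputs
lam = 1e-3                                   # L1 strength -> sparse, readable formulas

def eql_layer(x):
    """One EQL-style layer: linear map, then id/sin/cos units and pairwise products."""
    z = W @ x + b
    ident = z[:u]
    sin = np.sin(z[u:2 * u])
    cos = np.cos(z[2 * u:3 * u])
    prod = z[3 * u:4 * u] * z[4 * u:5 * u]   # multiplication units take two inputs each
    return np.concatenate([ident, sin, cos, prod])

def loss(x, y):
    pred = V @ eql_layer(x)
    return 0.5 * np.sum((pred - y) ** 2) + lam * (np.abs(W).sum() + np.abs(V).sum())

print(loss(np.array([0.3, -1.2]), np.array([0.5])))
```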
BJAY0Y07g
ryUPiRvge
ICLR.cc/2017/conference/-/paper50/official/review
{"title": "An interesting paper for domain adapatation with NO target domain data", "rating": "6: Marginally above acceptance threshold", "review": "Thank you for an interesting read. \n\nTo my knowledge, very few papers have looked at transfer learning with **no** target domain data (the authors called this task as \"extrapolation\"). This paper clearly shows that the knowledge of the underlying system dynamics is crucial in this case. The experiments clearly showed the promising potential of the proposed EQL model. I think EQL is very interesting also from the perspective of interpretability, which is crucial for data analysis in scientific domains.\n\nQuesions and comments:\n\n1. Multiplication units. By the universal approximation theorem, multiplication can also be represented by a neural network in the usual sense. I agree with the authors' explanation of interpolation and extrapolation, but I still don't quite understand why multiplication unit is crucial here. I guess is it because this representation generalises better when training data is not that representative for the future?\n\n2. Fitting an EQL vs. fitting a polynomial. It seems to me that the number of layers in EQL has some connections to the degree of the polynomial. Assume we know the underlying dynamics we want to learn can be represented by a polynomial. Then what's the difference between fitting a polynomial (with model selection techniques to determine the degree) and fitting an EQL (with model selection techniques to determine the number of layers)? Also your experiments showed that the selection of basis functions (specific to the underlying dynamics you want to learn) is crucial for the performance. This means you need to have some prior knowledge on the form of the equation anyway!\n\n3. Ben-David et al. 2010 has presented some error bounds for the hypothesis that is trained on source data but tested on the target data. I wonder if your EQL model can achieve better error bounds?\n\n4. Can you comment on the comparison of your method to those who modelled the extrapolation data with **uncertainty**?", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Extrapolation and learning equations
["Georg Martius", "Christoph H. Lampert"]
In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified.
["Supervised Learning", "Deep learning", "Structured prediction"]
https://openreview.net/forum?id=ryUPiRvge
https://openreview.net/pdf?id=ryUPiRvge
https://openreview.net/forum?id=ryUPiRvge&noteId=BJAY0Y07g
SyoOxS0Xl
SJCscQcge
ICLR.cc/2017/conference/-/paper198/official/review
{"title": "Blackbox adversarial examples", "rating": "4: Ok but not good enough - rejection", "review": "The authors propose a method to generate adversarial examples w/o relying on knowledge of the network architecture or network gradients.\n\nThe idea has some merit, however, as mentioned by one of the reviewers, the field has been studied widely, including black box setups.\n\nMy main concern is that the first set of experiments allows images that are not in image space. The authors acknowledge this fact on page 7 in the first paragraph. In my opinion, this renders these experiments completely meaningless. At the very least, the outcome is not surprising to me at all.\n\nThe greedy search procedure remedies this issue. The description of the proposed method is somewhat convoluted. AFAICT, first a candidate set of pixels is generated by using PERT. Then the pixels are perturbed using CYCLIC.\nIt is not clear why this approach results in good/minimal perturbations as the candidate pixels are found using a large \"p\" that can result in images outside the image space. The choice of this method does not seem to be motivated by the authors.\n\nIn conclusion, while the authors to an interesting investigation and propose a method to generate adversarial images from a black-box network, the overall approach and conclusions seem relatively straight forward. The paper is verbosely written and I feel like the findings could be summarized much more succinctly.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Simple Black-Box Adversarial Perturbations for Deep Networks
["Nina Narodytska", "Shiva Kasiviswanathan"]
Deep neural networks are powerful and popular learning models that achieve state-of-the-art pattern recognition performance on many computer vision, speech, and language processing tasks. However, these networks have also been shown susceptible to carefully crafted adversarial perturbations which force misclassification of the inputs. Adversarial examples enable adversaries to subvert the expected system behavior leading to undesired consequences and could pose a security risk when these systems are deployed in the real world. In this work, we focus on deep convolutional neural networks and demonstrate that adversaries can easily craft adversarial examples even without any internal knowledge of the target network. Our attacks treat the network as an oracle (black-box) and only assume that the output of the network can be observed on the probed inputs. Our first attack is based on a simple idea of adding perturbation to a randomly selected single pixel or a small set of them. We then improve the effectiveness of this attack by carefully constructing a small set of pixels to perturb by using the idea of greedy local-search. Our proposed attacks also naturally extend to a stronger notion of misclassification. Our extensive experimental results illustrate that even these elementary attacks can reveal a deep neural network's vulnerabilities. The simplicity and effectiveness of our proposed schemes mean that they could serve as a litmus test while designing robust networks.
["Computer vision", "Deep learning"]
https://openreview.net/forum?id=SJCscQcge
https://openreview.net/pdf?id=SJCscQcge
https://openreview.net/forum?id=SJCscQcge&noteId=SyoOxS0Xl
S1RZxo-Hg
SJCscQcge
ICLR.cc/2017/conference/-/paper198/official/review
{"title": "Too verbose for little insight", "rating": "4: Ok but not good enough - rejection", "review": "\n\nPaper summary:\nThis work proposes a new algorithm to generate k-adversarial images by modifying a small fraction of the image pixels and without requiring access to the classification network weight.\n\n\nReview summary:\nThe topic of adversarial images generation is of both practical and theoretical interest. This work proposes a new approach to the problem, however the paper suffers from multiple issues. It is too verbose (spending long time on experiments of limited interest); disorganized (detailed description of the main algorithm in sections 4 and 5, yet a key piece is added in the experimental section 6); and more importantly the resulting experiments are of limited interest to the reader, and the main conclusions are left unclear.\nThis looks like an interesting line of work that has yet to materialize in a good document, it would need significant re-writing to be in good shape for ICLR.\n\n\nPros:\n* Interesting topic\n* Black-box setup is most relevant\n* Multiple experiments\n* Shows that with flipping only 1~5% of pixels, adversarial images can be created\n\n\nCons:\n* Too long, yet key details are not well addressed\n* Some of the experiments are of little interest\n* Main experiments lack key measures or additional baselines\n* Limited technical novelty\n\n\n\n\nQuality: the method description and experimental setup leave to be desired. \n\n\nClarity: the text is verbose, somewhat formal, and mostly clear; but could be improved by being more concise.\n\n\nOriginality: I am not aware of another work doing this exact same type of experiments. However the approach and results are not very surprising.\n\n\nSignificance: the work is incremental, the issues in the experiments limit potential impact of this paper.\n\n\nSpecific comments:\n* I would suggest to start by making the paper 30%~40% shorter. Reducing the text length, will force to make the argumentation and descriptions more direct, and select only the important experiments.\n* Section 4 seems flawed. If the modified single pixel can have values far outside of the [LB, UB] range; then this test sample is clearly outside of the training distribution; and thus it is not surprising that the classifier misbehaves (this would be true for most classifiers, e.g. decision forests or non-linear SVMs). These results would be interesting only if the modified pixel is clamped to the range [LB, UB].\n* [LB, UB] is never specified, is it ? How does p = 100, compares to [LB, UB] ? To be of any use, p should be reported in proportion to [LB, UB]\n* The modification is done after normalization, is this realistic ? \n* Alg 2, why not clamping to [LB, UB] ?\n* Section 6, \u201cimplementing algorithm LocSearchAdv\u201d, the text is unclear on how p is adjusted; new variables are added. This is confusion.\n* Section 6, what happens if p is _not_ adjusted ? What happens if a simple greedy random search is used (e.g. try 100 times a set of 5 random pixels with value 255) ?\n* Section 6, PTB is computed over all pixels ? including the ones not modified ? why is that ? Thus LocSearchAdv PTB value is not directly comparable to FGSM, since it intermingles with #PTBPixels (e.g. \u201cin many cases far less average perturbation\u201d claim).\n* Section 6, there is no discussion on the average number of model evaluations. This would be equivalent to the number of requests made to a system that one would try to fool. 
This number is important to claim the \u201ceffectiveness\u201d of such black box attacks. Right now the text only mentions the upper bound of 750 network evaluations. \n* How does the number of network evaluations changes when adjusting or not adjusting p during the optimization ?\n* Top-k is claimed as a main point of the paper, yet only one experiment is provided. Please develop more, or tune-down the claims.\n* Why is FGSM not effective for batch normalized networks ? Has this been reported before ? Are there other already published techniques that are effective for this scenario ? Comparing to more methods would be interesting.\n* If there is little to note from section 4 results, what should be concluded from section 6 ? That is possible to obtain good results by modifying only few pixels ? What about selecting the \u201ctop N\u201d largest modified pixels from FGSM ? Would these be enough ? Please develop more the baselines, and the specific conclusions of interest.\n\n\nMinor comments:\n* The is an abuse of footnotes, most of them should be inserted in the main text.\n* I would suggest to repeat twice or thrice the meaning of the main variables used (e.g. p, r, LB, UB)\n* Table 1,2,3 should be figures\n* Last line of first paragraph of section 6 is uninformative.\n* Very tiny -> small", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Simple Black-Box Adversarial Perturbations for Deep Networks
["Nina Narodytska", "Shiva Kasiviswanathan"]
Deep neural networks are powerful and popular learning models that achieve state-of-the-art pattern recognition performance on many computer vision, speech, and language processing tasks. However, these networks have also been shown susceptible to carefully crafted adversarial perturbations which force misclassification of the inputs. Adversarial examples enable adversaries to subvert the expected system behavior leading to undesired consequences and could pose a security risk when these systems are deployed in the real world. In this work, we focus on deep convolutional neural networks and demonstrate that adversaries can easily craft adversarial examples even without any internal knowledge of the target network. Our attacks treat the network as an oracle (black-box) and only assume that the output of the network can be observed on the probed inputs. Our first attack is based on a simple idea of adding perturbation to a randomly selected single pixel or a small set of them. We then improve the effectiveness of this attack by carefully constructing a small set of pixels to perturb by using the idea of greedy local-search. Our proposed attacks also naturally extend to a stronger notion of misclassification. Our extensive experimental results illustrate that even these elementary attacks can reveal a deep neural network's vulnerabilities. The simplicity and effectiveness of our proposed schemes mean that they could serve as a litmus test while designing robust networks.
["Computer vision", "Deep learning"]
https://openreview.net/forum?id=SJCscQcge
https://openreview.net/pdf?id=SJCscQcge
https://openreview.net/forum?id=SJCscQcge&noteId=S1RZxo-Hg
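Since several of the reviews above debate the greedy local-search procedure (candidate pixels, the perturbation magnitude p, and clamping to [LB, UB]), the following is a minimal sketch of such a black-box attack. The oracle, the perturbation rule, and all constants are assumptions for illustration, not the paper's exact PERT/CYCLIC algorithm.

```python
import numpy as np

def greedy_local_search(img, true_label, score_fn, rounds=10, n_cand=30,
                        step=0.7, lb=0.0, ub=1.0, rng=None):
    """Black-box attack sketch: repeatedly perturb the pixels whose modification
    most reduces the oracle's probability for the true label.

    `score_fn(img)` is the only access to the model (it returns class
    probabilities); perturbed values are clamped to [lb, ub]."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = img.copy()
    h, w = x.shape
    for _ in range(rounds):
        cand = [(rng.integers(h), rng.integers(w)) for _ in range(n_cand)]
        scores = []
        for (i, j) in cand:
            trial = x.copy()
            trial[i, j] = np.clip(trial[i, j] + step * (ub - lb), lb, ub)
            scores.append(score_fn(trial)[true_label])     # one oracle query each
        for idx in np.argsort(scores)[:5]:                 # keep the 5 most damaging
            i, j = cand[idx]
            x[i, j] = np.clip(x[i, j] + step * (ub - lb), lb, ub)
        if np.argmax(score_fn(x)) != true_label:
            break                                          # misclassified: done
    return x

# Toy oracle: a fixed random linear "classifier" over pixel intensities.
rng = np.random.default_rng(1)
Wc = rng.normal(size=(10, 28 * 28))
def toy_scores(im):
    z = Wc @ im.ravel()
    e = np.exp(z - z.max())
    return e / e.sum()

adv = greedy_local_search(rng.random((28, 28)), true_label=3, score_fn=toy_scores)
```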
S1_Yb5N4g
SJCscQcge
ICLR.cc/2017/conference/-/paper198/official/review
{"title": "review: incremental", "rating": "4: Ok but not good enough - rejection", "review": "The paper presents a method for generating adversarial input images for a convolutional neural network given only black box access (ability to obtain outputs for chosen inputs, but no access to the network parameters). However, the notion of adversarial example is somewhat weakened in this setting: it is k-misclassification (ensuring the true label is not a top-k output), instead of misclassification to any desired target label.\n\nA similar black-box setting is examined in Papernot et al. (2016c). There, black-box access is used to train a substitute for the network, which is then attacked. Here, black-box access in instead exploited via local search. The input is perturbed, the resulting change in output scores is examined, and perturbations that push the scores towards k-misclassification are kept.\n\nA major concern with regard to novelty is that this greedy local search procedure is analogous to gradient descent; a numeric approximation (observe change in output for corresponding change in input) is used instead of backpropagation, since one does not have access to the network parameters. As such, the greedy local search algorithm itself, to which the paper devotes a large amount of discussion, is not surprising and the paper is fairly incremental in terms of technical novelty.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Simple Black-Box Adversarial Perturbations for Deep Networks
["Nina Narodytska", "Shiva Kasiviswanathan"]
Deep neural networks are powerful and popular learning models that achieve state-of-the-art pattern recognition performance on many computer vision, speech, and language processing tasks. However, these networks have also been shown susceptible to carefully crafted adversarial perturbations which force misclassification of the inputs. Adversarial examples enable adversaries to subvert the expected system behavior leading to undesired consequences and could pose a security risk when these systems are deployed in the real world. In this work, we focus on deep convolutional neural networks and demonstrate that adversaries can easily craft adversarial examples even without any internal knowledge of the target network. Our attacks treat the network as an oracle (black-box) and only assume that the output of the network can be observed on the probed inputs. Our first attack is based on a simple idea of adding perturbation to a randomly selected single pixel or a small set of them. We then improve the effectiveness of this attack by carefully constructing a small set of pixels to perturb by using the idea of greedy local-search. Our proposed attacks also naturally extend to a stronger notion of misclassification. Our extensive experimental results illustrate that even these elementary attacks can reveal a deep neural network's vulnerabilities. The simplicity and effectiveness of our proposed schemes mean that they could serve as a litmus test while designing robust networks.
["Computer vision", "Deep learning"]
https://openreview.net/forum?id=SJCscQcge
https://openreview.net/pdf?id=SJCscQcge
https://openreview.net/forum?id=SJCscQcge&noteId=S1_Yb5N4g
rktOx2WNl
SyJNmVqgg
ICLR.cc/2017/conference/-/paper212/official/review
{"title": "Review", "rating": "6: Marginally above acceptance threshold", "review": "This work proposes to augment normal gradient descent algorithms with a \"Data Filter\", that acts as a curriculum teacher by selecting which examples the trained target network should see to learn optimally. Such a filter is learned simultaneously to the target network, and trained via Reinforcement Learning algorithms receiving rewards based on the state of training with respect to some pseudo-validation set.\n\n\nStylistic comment, please use the more common style of \"(Author, year)\" rather than \"Author (year)\" when the Author is *not* referred to or used in the sentence.\nE.g. \"and its variants such as Adagrad Duchi et al. (2011)\" should be \"such as Adagrad (Duchi et al., 2011)\", and \"proposed in Andrychowicz et al. (2016),\" should remain so.\n\nI think the paragraph containing \"What we need to do is, after seeing the mini-batch Dt of M training instances, we dynamically determine which instances in Dt are used for training and which are filtered.\" should be clarified. What is \"seeing\"? That is, you should mention explicitly that you do the forward-pass first, then compute features from that, and then decide for which examples to perform the backwards pass.\n\n\nThere are a few choices in this work which I do not understand:\n\nWhy wait until the end of the episode to update your reinforce policy (algorithm 2), but train your actor critic at each step (algorithm 3)? You say REINFORCE has high variance, which is true, but does not mean it cannot be trained at each step (unless you have some experiments that suggest otherwise, and if so they should be included or mentionned in the paper).\n\nSimilarly, why not train REINFORCE with the same reward as your Actor-Critic model? And vice-versa? You claim several times that a limitation of REINFORCE is that you need to wait for the episode to be over, but considering your data is i.i.d., you can make your episode be anything from a single training step, one D_t, to the whole multi-epoch training procedure.\n\n\nI have a few qualms with the experimental setting:\n- is Figure 2 obtained from a single (i.e. one per setup) experiment? From different initial weights? If so, there is no proper way of knowing whether results are chance or not! This is a serious concern for me.\n- with most state-of-the-art work using optimization methods such as Adam and RMSProp, is it surprising that they were not experimented with.\n- it is not clear what the learning rates are; how fast should the RL part adapt to the SL part? Its not clear that this was experimented with at all.\n- the environment, i.e. the target network being trained, is not stationnary at all. It would have been interesting to measure how much the policy changes as a function of time. Figure 3, could both be the result of the policy adapting, or of the policy remaining fixed and the features changing (which could indicate a failure of the policy to adapt).\n- in fact it is not really adressed in the paper that the environment is non-stationary, given the current setup, the distribution of features will change as the target network progresses. This has an impact on optimization.\n- how is the \"pseudo-validation\" data, target to the policy, chosen? It should be a subset of the training data. 
The second paragraph of section 3.2 suggests something of the sort, but then your algorithms suggest that the same data is used to train both the policies and the networks, so I am unsure of which is what.\n\n\nOverall the idea is novel and interesting, the paper is well written for the most part, but the methodology has some flaws. Clearer explanations and either more justification of the experimental choices or more experiments are needed to make this paper complete. Unless the authors convince me otherwise, I think it would be worth waiting for more experiments and submitting a very strong paper rather than presenting this (potentially powerful!) idea with weak results.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neural Data Filter for Bootstrapping Stochastic Gradient Descent
["Yang Fan", "Fei Tian", "Tao Qin", "Tie-Yan Liu"]
Mini-batch based Stochastic Gradient Descent(SGD) has been widely used to train deep neural networks efficiently. In this paper, we design a general framework to automatically and adaptively select training data for SGD. The framework is based on neural networks and we call it \emph{\textbf{N}eural \textbf{D}ata \textbf{F}ilter} (\textbf{NDF}). In Neural Data Filter, the whole training process of the original neural network is monitored and supervised by a deep reinforcement network, which controls whether to filter some data in sequentially arrived mini-batches so as to maximize future accumulative reward (e.g., validation accuracy). The SGD process accompanied with NDF is able to use less data and converge faster while achieving comparable accuracy as the standard SGD trained on the full dataset. Our experiments show that NDF bootstraps SGD training for different neural network models including Multi Layer Perceptron Network and Recurrent Neural Network trained on various types of tasks including image classification and text understanding.
["Reinforcement Learning", "Deep learning", "Optimization"]
https://openreview.net/forum?id=SyJNmVqgg
https://openreview.net/pdf?id=SyJNmVqgg
https://openreview.net/forum?id=SyJNmVqgg&noteId=rktOx2WNl
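The forward-pass-then-filter loop discussed in the review above can be sketched as a logistic keep/drop policy trained with episode-level REINFORCE. The feature choices, the reward, and all dimensions are assumptions for illustration; this is not the paper's exact NDF.

```python
import numpy as np

rng = np.random.default_rng(0)
f_dim = 4              # per-example state features (loss, margin, training progress, ...)
theta = np.zeros(f_dim)  # parameters of the logistic filter policy

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def filter_minibatch(features):
    """The forward pass happens first; `features` (M x f_dim) are derived from it.
    Sample a keep(1)/drop(0) action per example from the policy."""
    probs = sigmoid(features @ theta)
    actions = (rng.random(len(probs)) < probs).astype(float)
    return actions, probs    # the backward pass then runs on kept examples only

def reinforce_update(features, actions, probs, reward, lr=0.01, baseline=0.0):
    """Episode-level REINFORCE: `reward` could be validation accuracy at episode end.
    Uses the gradient of the log-probability of the sampled keep/drop decisions."""
    global theta
    grad_logp = ((actions - probs)[:, None] * features).sum(axis=0)
    theta += lr * (reward - baseline) * grad_logp

feats = rng.normal(size=(32, f_dim))
acts, probs = filter_minibatch(feats)
reinforce_update(feats, acts, probs, reward=0.8, baseline=0.5)
```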
HyoMSTSVl
SyJNmVqgg
ICLR.cc/2017/conference/-/paper212/official/review
{"title": "Final Review", "rating": "4: Ok but not good enough - rejection", "review": "Final review: The writers were very responsive and I agree the reviewer2 that their experimental setup is not wrong after all and increased the score by one. But I still think there is lack of experiments and the results are not conclusive. As a reader I am interested in two things, either getting a new insight and understanding something better, or learn a method for a better performance. This paper falls in the category two, but fails to prove it with more throughout and rigorous experiments. In summary the paper lacks experiments and results are inconclusive and I do not believe the proposed method would be quite useful and hence not a conference level publication. \n\n--\nThe paper proposes to train a policy network along the main network for selecting subset of data during training for achieving faster convergence with less data.\n\nPros:\nIt's well written and straightforward to follow\nThe algorithm has been explained clearly.\n\nCons:\nSection 2 mentions that the validation accuracy is used as one of the feature vectors for training the NDF. This invalidates the experiments, as the training procedure is using some data from the validation set.\n\nOnly one dataset has been tested on. Papers such as this one that claim faster convergence rate should be tested on multiple datasets and network architectures to show consistency of results. Especially larger datasets as the proposed methods is going to use less training data at each iteration, it has to be shown in much larger scaler datasets such as Imagenet.\n\nAs discussed more in detail in the pre-reviews question, if the paper is claiming faster convergence then it has to compare the learning curves with other baselines such Adam. Plain SGD is very unfair comparison as it is almost never used in practice. And this is regardless of what is the black box optimizer they use. The case could be that Adam alone as black box optimizer works as well or better than Adam as black box + NDF.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Neural Data Filter for Bootstrapping Stochastic Gradient Descent
["Yang Fan", "Fei Tian", "Tao Qin", "Tie-Yan Liu"]
Mini-batch based Stochastic Gradient Descent(SGD) has been widely used to train deep neural networks efficiently. In this paper, we design a general framework to automatically and adaptively select training data for SGD. The framework is based on neural networks and we call it \emph{\textbf{N}eural \textbf{D}ata \textbf{F}ilter} (\textbf{NDF}). In Neural Data Filter, the whole training process of the original neural network is monitored and supervised by a deep reinforcement network, which controls whether to filter some data in sequentially arrived mini-batches so as to maximize future accumulative reward (e.g., validation accuracy). The SGD process accompanied with NDF is able to use less data and converge faster while achieving comparable accuracy as the standard SGD trained on the full dataset. Our experiments show that NDF bootstraps SGD training for different neural network models including Multi Layer Perceptron Network and Recurrent Neural Network trained on various types of tasks including image classification and text understanding.
["Reinforcement Learning", "Deep learning", "Optimization"]
https://openreview.net/forum?id=SyJNmVqgg
https://openreview.net/pdf?id=SyJNmVqgg
https://openreview.net/forum?id=SyJNmVqgg&noteId=HyoMSTSVl
SyBXdRUEx
SyJNmVqgg
ICLR.cc/2017/conference/-/paper212/official/review
{"title": "data filtering for faster sgd", "rating": "7: Good paper, accept", "review": "Paper is easy to follow, Idea is pretty clear and makes sense.\nExperimental results are hard to judge, it would be nice to have other baselines.\nFor faster training convergence, the question is how well tuned SGD is, I didn't\nsee any mentioning of learning rate schedule. Also, it would be important to test\nthis on other data sets. Success with filtering training data could be task dependent."}
review
2017
ICLR.cc/2017/conference
Neural Data Filter for Bootstrapping Stochastic Gradient Descent
["Yang Fan", "Fei Tian", "Tao Qin", "Tie-Yan Liu"]
Mini-batch based Stochastic Gradient Descent(SGD) has been widely used to train deep neural networks efficiently. In this paper, we design a general framework to automatically and adaptively select training data for SGD. The framework is based on neural networks and we call it \emph{\textbf{N}eural \textbf{D}ata \textbf{F}ilter} (\textbf{NDF}). In Neural Data Filter, the whole training process of the original neural network is monitored and supervised by a deep reinforcement network, which controls whether to filter some data in sequentially arrived mini-batches so as to maximize future accumulative reward (e.g., validation accuracy). The SGD process accompanied with NDF is able to use less data and converge faster while achieving comparable accuracy as the standard SGD trained on the full dataset. Our experiments show that NDF bootstraps SGD training for different neural network models including Multi Layer Perceptron Network and Recurrent Neural Network trained on various types of tasks including image classification and text understanding.
["Reinforcement Learning", "Deep learning", "Optimization"]
https://openreview.net/forum?id=SyJNmVqgg
https://openreview.net/pdf?id=SyJNmVqgg
https://openreview.net/forum?id=SyJNmVqgg&noteId=SyBXdRUEx
SySmUNZNg
ByOK0rwlx
ICLR.cc/2017/conference/-/paper43/official/review
{"title": "Novel quantization method to reduce memory and complexity of pre-trained networks, but benefit over other methods is unclear", "rating": "4: Ok but not good enough - rejection", "review": "This paper explores a new quantization method for both the weights and the activations that does not need re-training. In VGG-16 the method reaches compression ratios of 20x and experiences a speed-up of 15x. The paper is very well written and clearly exposes the details of the methodology and the results.\n\nMy major criticisms are three-fold: for one, the results are not compared to one of the many other pruning methods that are described in section 1.1, and as such the performance of the method is difficult to judge from the paper alone. Second, there have been several other compression schemes involving pruning, re-training and vector-quantization [e.g. 1, 2, 3] that seem to achieve much higher accuracies, compression ratios and speed-ups. Hence, for the practical application of running such networks on low-power, low-memory devices, other methods seem to be much more suited. The advantage of the given method - other then possibly reducing the time it takes to compress the network - is thus unclear. In particular, taking a pre-trained network as a starting point for a quantized model that is subsequently fine-tuned might not take much longer to process then the method given here (but maybe the authors can quantify this?). Finally, much of the speed-up and memory reduction in the VGG-model seems to arise from the three fully-connected layers, in particular the last one. The speed-up in the convolutional layers is comparably small, making me wonder how well the method would work in all-convolutional networks such as the Inception architecture.\n\n[1] Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, https://arxiv.org/abs/1510.00149\n[2] Compressing Deep Convolutional Networks using Vector Quantization, https://arxiv.org/abs/1412.6115\n[3] XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, https://arxiv.org/abs/1603.05279", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Ternary Weight Decomposition and Binary Activation Encoding for Fast and Compact Neural Network
["Mitsuru Ambai", "Takuya Matsumoto", "Takayoshi Yamashita", "Hironobu Fujiyoshi"]
This paper aims to reduce test-time computational load of a deep neural network. Unlike previous methods which factorize a weight matrix into multiple real-valued matrices, our method factorizes both weights and activations into integer and noninteger components. In our method, the real-valued weight matrix is approximated by a multiplication of a ternary matrix and a real-valued co-efficient matrix. Since the ternary matrix consists of three integer values, {-1, 0, +1}, it only consumes 2 bits per element. At test-time, an activation vector that passed from a previous layer is also transformed into a weighted sum of binary vectors, {-1, +1}, which enables fast feed-forward propagation based on simple logical operations: AND, XOR, and bit count. This makes it easier to deploy a deep network on low-power CPUs or to design specialized hardware. In our experiments, we tested our method on three different networks: a CNN for handwritten digits, VGG-16 model for ImageNet classification, and VGG-Face for large-scale face recognition. In particular, when we applied our method to three fully connected layers in the VGG-16, 15x acceleration and memory compression up to 5.2% were achieved with only a 1.43% increase in the top-5 error. Our experiments also revealed that compressing convolutional layers can accelerate inference of the entire network in exchange of slight increase in error.
["Deep learning"]
https://openreview.net/forum?id=ByOK0rwlx
https://openreview.net/pdf?id=ByOK0rwlx
https://openreview.net/forum?id=ByOK0rwlx&noteId=SySmUNZNg
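The core factorization in the paper under review, W ~ M C with a ternary M, can be illustrated with a simple alternating scheme. The solver below is an assumed variant (least squares for C, a scale-then-round projection for M), not the paper's exact algorithm, and the shapes are arbitrary.

```python
import numpy as np

def ternary_decompose(W, k, iters=30, rng=None):
    """Approximate W (m x n) as M @ C with M in {-1, 0, +1}^(m x k) and real C.

    Alternating scheme: fix M and solve C by least squares; fix C, compute the
    unconstrained M, then project it onto {-1, 0, +1} with a scale-then-round
    heuristic that keeps a mix of ternary values."""
    rng = rng if rng is not None else np.random.default_rng(0)
    m, _ = W.shape
    M = rng.choice([-1.0, 0.0, 1.0], size=(m, k))
    for _ in range(iters):
        C, *_ = np.linalg.lstsq(M, W, rcond=None)      # best C for fixed M
        M_real = W @ np.linalg.pinv(C)                 # unconstrained update of M
        scale = np.abs(M_real).mean() + 1e-12
        M = np.clip(np.round(M_real / scale), -1.0, 1.0)
    return M, C

W = np.random.default_rng(1).normal(size=(64, 32))
M, C = ternary_decompose(W, k=16)
print("relative error:", np.linalg.norm(W - M @ C) / np.linalg.norm(W))
```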
rk9ryJzNx
ByOK0rwlx
ICLR.cc/2017/conference/-/paper43/official/review
{"title": "Clarify my comments", "rating": "5: Marginally below acceptance threshold", "review": "I do need to see the results in a clear table. Original results and results when compression is applied for all the tasks. In any case, i would like to see the results when the compression is applied to state of the art nets where the float representation is important. For instance a network with 0.5% - 0.8% in MNIST. A Imagenet lower that 5% - 10%. Some of this results are feasible with float representation but probably imposible for restricted representations.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Ternary Weight Decomposition and Binary Activation Encoding for Fast and Compact Neural Network
["Mitsuru Ambai", "Takuya Matsumoto", "Takayoshi Yamashita", "Hironobu Fujiyoshi"]
This paper aims to reduce the test-time computational load of a deep neural network. Unlike previous methods which factorize a weight matrix into multiple real-valued matrices, our method factorizes both weights and activations into integer and noninteger components. In our method, the real-valued weight matrix is approximated by a multiplication of a ternary matrix and a real-valued coefficient matrix. Since the ternary matrix consists of three integer values, {-1, 0, +1}, it only consumes 2 bits per element. At test-time, an activation vector passed from the previous layer is also transformed into a weighted sum of binary vectors, {-1, +1}, which enables fast feed-forward propagation based on simple logical operations: AND, XOR, and bit count. This makes it easier to deploy a deep network on low-power CPUs or to design specialized hardware. In our experiments, we tested our method on three different networks: a CNN for handwritten digits, the VGG-16 model for ImageNet classification, and VGG-Face for large-scale face recognition. In particular, when we applied our method to the three fully connected layers in VGG-16, 15x acceleration and memory compression to 5.2% of the original size were achieved with only a 1.43% increase in the top-5 error. Our experiments also revealed that compressing convolutional layers can accelerate inference of the entire network in exchange for a slight increase in error.
["Deep learning"]
https://openreview.net/forum?id=ByOK0rwlx
https://openreview.net/pdf?id=ByOK0rwlx
https://openreview.net/forum?id=ByOK0rwlx&noteId=rk9ryJzNx
HJ5-4JL4e
ByOK0rwlx
ICLR.cc/2017/conference/-/paper43/official/review
{"title": "Review", "rating": "6: Marginally above acceptance threshold", "review": "This paper addresses to reduce test-time computational load of DNNs. Another factorization approach is proposed and shows good results. The comparison to the other methods is not comprehensive, the paper provides good insights.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Ternary Weight Decomposition and Binary Activation Encoding for Fast and Compact Neural Network
["Mitsuru Ambai", "Takuya Matsumoto", "Takayoshi Yamashita", "Hironobu Fujiyoshi"]
This paper aims to reduce the test-time computational load of a deep neural network. Unlike previous methods which factorize a weight matrix into multiple real-valued matrices, our method factorizes both weights and activations into integer and noninteger components. In our method, the real-valued weight matrix is approximated by a multiplication of a ternary matrix and a real-valued coefficient matrix. Since the ternary matrix consists of three integer values, {-1, 0, +1}, it only consumes 2 bits per element. At test-time, an activation vector passed from the previous layer is also transformed into a weighted sum of binary vectors, {-1, +1}, which enables fast feed-forward propagation based on simple logical operations: AND, XOR, and bit count. This makes it easier to deploy a deep network on low-power CPUs or to design specialized hardware. In our experiments, we tested our method on three different networks: a CNN for handwritten digits, the VGG-16 model for ImageNet classification, and VGG-Face for large-scale face recognition. In particular, when we applied our method to the three fully connected layers in VGG-16, 15x acceleration and memory compression to 5.2% of the original size were achieved with only a 1.43% increase in the top-5 error. Our experiments also revealed that compressing convolutional layers can accelerate inference of the entire network in exchange for a slight increase in error.
["Deep learning"]
https://openreview.net/forum?id=ByOK0rwlx
https://openreview.net/pdf?id=ByOK0rwlx
https://openreview.net/forum?id=ByOK0rwlx&noteId=HJ5-4JL4e
SyxnNWM4e
HJ7O61Yxe
ICLR.cc/2017/conference/-/paper73/official/review
{"title": "Interesting idea but formulation and experiments not convincing", "rating": "4: Ok but not good enough - rejection", "review": "This manuscript proposes an approach for modeling correlated timeseries through a combination of loss functions which depend on neural networks. The loss functions correspond to: data fit term, autoregressive latent state term, and a term which captures relations between pairs of timeseries (relations have to be given as prior information).\n\nModeling relational timeseries is a well-researched problem, however little attention has been given to it in the neural network community. Perhaps the reason for this is the importance of having uncertainty in the representation. The authors correctly identify this need and consider an approach which considers distributions in the state space.\n\nThe formulation is quite straightforward by combining loss functions. The model adds to Ziat et al. 2016 in certain aspects which are well motivated, but unfortunately implemented in an unconvincing way. To start with, uncertainty is not treated in a very principled way, since the inference in the model is rather naive; I'd expect employing a VAE framework [1] for better uncertainty handling. Furthermore, the Gaussian co-variance collapses into a variance, which is the opposite of what one would want for modelling correlated time-series. There are approaches which take these correlations into account in the states, e.g. [2].\n\nMoreover, the treatment of uncertainty only allows for linear decoding function f. This significantly reduces the power of the model. State of the art methods in timeseries modeling have moved beyond this constraint, especially in the Gaussian process community e.g. [2,3,4,5]. Comparing to a few of these methods, or at least discussing them would be useful.\n\n\nReferences:\n[1] Kingma and Welling. Auto-encoding Variational Bayes. arXiv:1312.6114\n[2] Damianou et al. Variational Gaussian process dynamical systems. NIPS 2011.\n[3] Mattos et al. Recurrent Gaussian processes. ICLR 2016.\n[4] Frigola. Bayesian Time Series Learning with Gaussian Processes, University of Cambridge, PhD Thesis, 2015. \n[5] Frigola et al. Variational Gaussian Process State-Space Models. NIPS 2014\n\n\nOne innovation is that the prior structure of the correlation needs to be given. This is a potentially useful and also original structural component. However, it also constitutes a limitation in some sense, since it is unrealistic in many scenarios to have this prior information. Moreover, the particular regularizer that makes \"similar\" timeseries to have closeness in the state space seems problematic. Some timeseries groups might be more \"similar\" than others, and also the similarity might be of different nature across groups. These variations cannot be well captured/distilled by a simple indicator variable e_ij. Furthermore, these variables are in practice taken to be binary (by looking at the experiments), which would make it even harder to model rich correlations. \n\nThe experiments show that the proposed method works, but they are not entirely convincing. Importantly, they do not shed enough light into the different properties of the model w.r.t its different parts. For example, the effect and sensitivity of the different regularizers. The authors state in a pre-review answer that they amended with some more results, but I can't see a revision in openreview (please let me know if I've missed it). 
From the performance point of view, the results are not particularly exciting, especially given the fact that it's not clear which loss is better (making it difficult to use the method in practice). \n\nIt would also be very interesting to report the optimized values of the parameters \\lambda, to get an idea of how the different losses behave.\n\nTimeseries analysis is a very well-researched area. Given the above, it's not clear to me why one would prefer to use this model over other approaches. Methodology wise, there are no novel components that offer a proven advantage with respect to past methods. The uncertainty in the states and the correlation of the time-series are the aspects which could add an advantage, but are not adequately researched in this paper.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Modelling Relational Time Series using Gaussian Embeddings
["Ludovic Dos Santos", "Ali Ziat", "Ludovic Denoyer", "Benjamin Piwowarski", "Patrick Gallinari"]
We address the problem of modeling multiple simultaneous time series where the observations are correlated not only inside each series, but among the different series. This problem arises in many domains such as ecology, meteorology, etc. We propose a new dynamical state space model, based on representation learning, for modeling the evolution of such series. The joint relational and temporal dynamics of the series are modeled as Gaussian distributions in a latent space. A decoder maps the latent representations to the observations. The two components (dynamic model and decoder) are jointly trained. Using stochastic representations allows us to model the uncertainty inherent in observations and to predict unobserved values together with a confidence in the prediction.
["Applications", "Deep learning"]
https://openreview.net/forum?id=HJ7O61Yxe
https://openreview.net/pdf?id=HJ7O61Yxe
https://openreview.net/forum?id=HJ7O61Yxe&noteId=SyxnNWM4e
rkHCIUMVg
HJ7O61Yxe
ICLR.cc/2017/conference/-/paper73/official/review
{"title": "Important line of research, muddled presentation and unconvincing empirical results", "rating": "4: Ok but not good enough - rejection", "review": "Because the authors did not respond to reviewer feedback, I am maintaining my original review score.\n\n-----\n\nThis paper proposes to model relational (i.e., correlated) time series using a deep learning-inspired latent variable approach: they design a flexible parametric (but not generative) model with Gaussian latent factors and fit it using a rich training objective including terms for reconstruction (of observed time series) error, smoothness in the latent state space (via a KL divergence term encouraging neighbor states to be similarly distributed), and a final regularizer that encourages related time series to have similar latent state trajectories. Relations between trajectories are hard coded based on pre-existing knowledge, i.e., latent state trajectories for neighboring (wind speed) base stations should be similar. The model appears to be fit using gradient simple descent. The authors propose several elaborations, including a nonlinear transition function (based on an MLP) and a reconstruction error term that takes variance into account. However, the model is restricted to using a linear decoder. Experimental results are positive but not convincing.\n\nStrengths:\n- The authors target a worthwhile and challenging problem: incorporating the modeling of uncertainty over hidden states with the power of flexible neural net-like models.\n- The idea of representing relationships between hidden states using KL divergence between their (distributions over) corresponding hidden states is clever. Combined with the Gaussian distribution over hidden states, the resulting regularization term is simple and differentiable.\n- This general approach -- focusing on writing down the problem as a neural network-like loss function -- seems robust and flexible and could be combined with other approaches, including variants of variational autoencoders.\n\nWeaknesses:\n- The presentation is a muddled, especially the model definition in Sec. 3.3. The authors introduce four variants of their model with different combinations of decoder (with and without variance term) and linear vs. MLP transition function. It appears that the 2,2 variant is generally better but not on all metrics and often by small margins. This makes drawing a solid conclusions difficult: what each component of the loss contributes, whether and how the nonlinear transition function helps and how much, how in practice the model should be applied, etc. I would suggest two improvements to the manuscript: (1) focus on the main 2,2 variant in Sec. 3.3 (with the hypothesis that it should perform best) and make the simpler variants additional \"baselines\" described in a paragraph in Sec. 4.1; (2) perform more thorough experiments with larger data sets to make a stronger case for the superiority of this approach.\n- The authors only allude to learning (with references to gradient descent and ADAM during model description) in this framework. Inference gets its one subsection but only one sentence that ends in an ellipsis (?).\n- It's unclear what is the purpose of introducing the inequality in Eq. 9.\n- Experimental results are not convincing: given the size of the data, the differences vs. 
the RNN and KF baselines is probably not significant, and these aren't particularly strong baselines (especially if it is in fact an RNN and not an LSTM or GRU).\n- The position of this paper is unclear with respect to variational autoencoders and related models. Recurrent variants of VAEs (e.g., Krishnan, et al., 2015) seem to achieve most of the same goals as far as uncertainty modeling is concerned. It seems like those could easily be extended to model relationships between time series using the simple regularization strategy used here. Same goes for Johnson, et al., 2016 (mentioned in separate question).\n\nThis is a valuable research direction with some intriguing ideas and interesting preliminary results. I would suggest that the authors restructure this manuscript a bit, striving for clarity of model description similar to the papers cited above and providing greater detail about learning and inference. They also need to perform more thorough experiments and present results that tell a clear story about the strengths and weaknesses of this approach.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Modelling Relational Time Series using Gaussian Embeddings
["Ludovic Dos Santos", "Ali Ziat", "Ludovic Denoyer", "Benjamin Piwowarski", "Patrick Gallinari"]
We address the problem of modeling multiple simultaneous time series where the observations are correlated not only inside each series, but among the different series. This problem arises in many domains such as ecology, meteorology, etc. We propose a new dynamical state space model, based on representation learning, for modeling the evolution of such series. The joint relational and temporal dynamics of the series are modeled as Gaussian distributions in a latent space. A decoder maps the latent representations to the observations. The two components (dynamic model and decoder) are jointly trained. Using stochastic representations allows us to model the uncertainty inherent in observations and to predict unobserved values together with a confidence in the prediction.
["Applications", "Deep learning"]
https://openreview.net/forum?id=HJ7O61Yxe
https://openreview.net/pdf?id=HJ7O61Yxe
https://openreview.net/forum?id=HJ7O61Yxe&noteId=rkHCIUMVg
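The reviews above refer to the KL divergence term between Gaussian latent states and note that, with Gaussian embeddings, the resulting regularizer is simple and differentiable. For reference, the standard closed form for diagonal Gaussians - assuming, as the reviews suggest, that the covariance collapses to a per-dimension variance - is:

```latex
\mathrm{KL}\big(\mathcal{N}(\mu_1,\operatorname{diag}\sigma_1^2)\,\|\,\mathcal{N}(\mu_2,\operatorname{diag}\sigma_2^2)\big)
= \frac{1}{2}\sum_{k}\left[\frac{\sigma_{1,k}^2}{\sigma_{2,k}^2}
+ \frac{(\mu_{2,k}-\mu_{1,k})^2}{\sigma_{2,k}^2}
- 1 + \ln\frac{\sigma_{2,k}^2}{\sigma_{1,k}^2}\right].
```

Every term is an elementary function of the means and variances, which is what makes the regularizer directly usable with gradient-based training.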
ryC8AjbVx
HJ7O61Yxe
ICLR.cc/2017/conference/-/paper73/official/review
{"title": "Interesting model, further experiments required", "rating": "4: Ok but not good enough - rejection", "review": "In absence of authors' response, the rating is maintained.\n\n---\n\nThis paper introduces a nonlinear dynamical model for multiple related multivariate time series. It models a linear observation model conditioned on the latent variables, a linear or nonlinear dynamical model between consecutive latent variables and a similarity constraint between any two time series (provided as prior data and non-learnable). The predictions/constraints given by the three components of the model are Gaussian, because the model predicts both the mean and the variance or covariance matrix. Inference is forward only.\n\nThe model is evaluated on four datasets, and compared to several baselines: plain auto-regressive models, feed-forward networks, RNN and dynamic factor graphs DFGs, which are RNNs with forward and backward inference of the latent variables.\n\nThe model, which introduces lateral constraints between different time series, and which predicts both the mean and covariance seems interesting, but presents two limitations.\n\nFirst of all, the paper should refer to variational auto-encoders / deep gaussian models, which also predict the mean and the variance during inference.\n\nSecondly, the datasets are extremely small. For example, the WHO contains only 91 times series of 52*10 = 520 time points. Although the experiments seem to suggest that the proposed model tends to outperform RNNs, the datasets are very small and the high variance in the results indicates that further experiments, with longer time series, are required. The paper could also easily be extended with more information about the model (what is the architecture of the MLP) as well as time complexity comparison between the models (especially between DFGs and this model).\n\nMinor remark:\nThe footnote 2 on page 5 seems to refer to the structural regularization term, not to the dynamical term.\n\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Modelling Relational Time Series using Gaussian Embeddings
["Ludovic Dos Santos", "Ali Ziat", "Ludovic Denoyer", "Benjamin Piwowarski", "Patrick Gallinari"]
We address the problem of modeling multiple simultaneous time series where the observations are correlated not only inside each series, but among the different series. This problem arises in many domains such as ecology, meteorology, etc. We propose a new dynamical state space model, based on representation learning, for modeling the evolution of such series. The joint relational and temporal dynamics of the series are modeled as Gaussian distributions in a latent space. A decoder maps the latent representations to the observations. The two components (dynamic model and decoder) are jointly trained. Using stochastic representations allows us to model the uncertainty inherent in observations and to predict unobserved values together with a confidence in the prediction.
["Applications", "Deep learning"]
https://openreview.net/forum?id=HJ7O61Yxe
https://openreview.net/pdf?id=HJ7O61Yxe
https://openreview.net/forum?id=HJ7O61Yxe&noteId=ryC8AjbVx
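The review above summarizes the model as three Gaussian components: a data-fit term through the decoder, a dynamical term between consecutive latent states, and a non-learnable similarity constraint between related series. A schematic reconstruction of such an objective, with f the decoder, g the transition model, \Delta the data-fit loss, e_ij the relation indicator, and the \lambda's the trade-off weights, might look as follows; the exact form is an assumption pieced together from the reviews' description, not the paper's equation:

```latex
\mathcal{L} \;=\; \sum_{i,t}\Delta\!\big(f(z^{(i)}_t),\,x^{(i)}_t\big)
\;+\;\lambda_{\mathrm{dy}}\sum_{i,t}\mathrm{KL}\!\big(z^{(i)}_{t+1}\,\big\|\,g(z^{(i)}_t)\big)
\;+\;\lambda_{\mathrm{re}}\sum_{t}\sum_{i,j} e_{ij}\,\mathrm{KL}\!\big(z^{(i)}_t\,\big\|\,z^{(j)}_t\big),
```

where each latent state z is a Gaussian distribution, so every KL term has the closed form given earlier.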
Hkxf8DNNe
BJ6oOfqge
ICLR.cc/2017/conference/-/paper176/official/review
{"title": "simple approach showing some decent results", "rating": "7: Good paper, accept", "review": "This paper presents a model for semi-supervised learning by encouraging feature invariance to stochastic perturbations of the network and/or inputs. Two models are described: One where an invariance term is applied between different instantiations of the model/input a single training step, and a second where invariance is applied to features for the same input point across training steps via a cumulative exponential averaging of the features. These models evaluated using CIFAR-10 and SVHN, finding decent gains of similar amounts in each case. An additional application is also explored at the end, showing some tolerance to corrupted labels as well.\n\nThe authors also discuss recent work by Sajjadi &al that is very similar in spirit, which I think helps corroborate the findings here.\n\nMy largest critique is it would have been nice to see applications on larger datasets as well. CIFAR and SVHN are fairly small test cases, though adequate for demonstration of the idea. For cases of unlabelled data especially, it would be good to see tests with on the order of 1M+ data samples, with 1K-10K labeled, as this is a common case when labels are missing.\n\nOn a similar note, data augmentations are restricted to only translations and (for CIFAR) horizontal flips. While \"standard,\" as the paper notes, more augmentations would have been interesting to see --- particularly since the model is designed explicitly to take advantage of random sampling. Some more details might also pop up, such as the one the paper mentions about handling horizontal flips in different ways between the two model variants. Rather than restrict the system to a particular set of augmentations, I think it would be interesting to push it further, and see how its performance behaves over a larger array of augmentations and (even fewer) numbers of labels.\n\nOverall, this seems like a simple approach that is getting decent results, though I would have liked to see more and larger experiments to get a better sense for its performance characteristics.\n\n\n\nSmaller comment: the paper mentions \"dark knowledge\" a couple times in explaining results, e.g. bottom of p.6. This is OK for a motivation, but in analyzing the results I think it may be possible to have something more concrete. For instance, the consistency term encourages feature invariance to the stochastic sampling more strongly than would a classification loss alone.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Temporal Ensembling for Semi-Supervised Learning
["Samuli Laine", "Timo Aila"]
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.
["labels", "unknown labels", "training", "temporal", "learning temporal", "learning", "simple", "efficient", "deep neural networks", "setting"]
https://openreview.net/forum?id=BJ6oOfqge
https://openreview.net/pdf?id=BJ6oOfqge
https://openreview.net/forum?id=BJ6oOfqge&noteId=Hkxf8DNNe
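The first model the review above describes - an invariance term between two stochastic instantiations of the model/input at a single training step - reduces to a simple two-pass loss. A minimal PyTorch-style sketch; the `augment` callable and the ramp-up weight `w_t` are assumptions, and dropout inside `net` supplies the per-pass stochasticity:

```python
import torch.nn.functional as F

def pi_model_loss(net, x, y, labeled_mask, w_t, augment):
    """Supervised loss on labeled examples plus a consistency (MSE) term
    between two stochastic forward passes, weighted by the ramp-up w_t."""
    z1 = net(augment(x))   # first stochastic instantiation
    z2 = net(augment(x))   # second pass: fresh augmentation and dropout draw
    sup = F.cross_entropy(z1[labeled_mask], y[labeled_mask])
    cons = F.mse_loss(F.softmax(z1, dim=1), F.softmax(z2, dim=1))
    return sup + w_t * cons
```

Note that the consistency term touches every example, labeled or not, which is what lets unlabeled data contribute to training.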
B1u6EURmg
BJ6oOfqge
ICLR.cc/2017/conference/-/paper176/official/review
{"title": "", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper presents a semi-supervised technique for \u201cself-ensembling\u201d where the model uses a consensus prediction (computed from previous epochs) as a target to regress to, in addition to the usual supervised learning loss. This has connections to the \u201cdark knowledge\u201d idea, ladder networks work is shown in this paper to be a promising technique for scenarios with few labeled examples (but not only). The paper presents two versions of the idea: one which is computationally expensive (and high variance) in that it needs two passes through the same example at a given step, and a temporal ensembling method that is stabler, cheaper computationally but more memory hungry and requires an extra hyper-parameter. \n\n\nMy thoughts on this work are mostly positive. The drawbacks that I see are that the temporal ensembling work requires potentially a lot of memory, and non-trivial infrastructure / book-keeping for imagenet-sized experiments. I am quite confused by the Figure 2 / Section 3.4 experiments about tolerance to noisy labels: it\u2019s *very* incredible to me that by making 90% of the labels random one can still train a classifier that is either 30% accurate or ~78% accurate (depending on whether or not temporal ensembling was used). I don\u2019t see how that can happen, basically.\n\n\nMinor stuff:\nPlease bold the best-in-category results in your tables. \nI think it would be nice to talk about the ramp-up of w(t) in the main paper. \nThe authors should consider putting the state of the art results for the fully-supervised case in their tables, instead of just their own.\nI am confused as to why the authors chose not to use more SVHN examples. The stated reason that it\u2019d be \u201ctoo easy\u201d seems a bit contrived: if they used all examples it would also make it easy to compare to previous work.\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Temporal Ensembling for Semi-Supervised Learning
["Samuli Laine", "Timo Aila"]
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.
["labels", "unknown labels", "training", "temporal", "learning temporal", "learning", "simple", "efficient", "deep neural networks", "setting"]
https://openreview.net/forum?id=BJ6oOfqge
https://openreview.net/pdf?id=BJ6oOfqge
https://openreview.net/forum?id=BJ6oOfqge&noteId=B1u6EURmg
SyezfkfEg
BJ6oOfqge
ICLR.cc/2017/conference/-/paper176/official/review
{"title": "Review", "rating": "9: Top 15% of accepted papers, strong accept", "review": "This work explores taking advantage of the stochasticity of neural network outputs under randomized augmentation and regularization techniques to provide targets for unlabeled data in a semi-supervised setting. This is accomplished by either applying stochastic augmentation and regularization on a single image multiple times per epoch and encouraging the outputs to be similar (\u03a0-model) or by keeping a weighted average of past epoch outputs and penalizing deviations of current network outputs from this running mean (temporal ensembling). The core argument is that these approaches produce ensemble predictions which are likely more accurate than the current network and are thus good targets for unlabeled data. Both approaches seem to work quite well on semi-supervised tasks and some results show that they are almost unbelievably robust to label noise.\n\nThe paper is clearly written and provides sufficient details to reproduce these results in addition to providing a public code base. The core idea of the paper is quite interesting and seems to result in higher semi-supervised accuracy than prior work. I also found the attention to and discussion of the effect of different choices of data augmentation to be useful.\t\n\nI am a little surprised that a standard supervised network can achieve 30% accuracy on SVHN given 90% random training labels. This would only give 19% correctly labeled data (9% by chance + 10% unaltered). I suppose the other 81% would not provide a consistent training signal such that it is possible, but it does seem quite unintuitive. I tried to look through the github for this experiment but it does not seem to be included. \n\nAs for the resistance of \u03a0-model and temporal ensembling to this label noise, I find that somewhat more believable given the large weights placed on the consistency constraint for this task. The authors should really include discussion of w(t) in the main paper. Especially because the tremendous difference in w_max in the incorrect label tolerance experiment (10x for \u03a0-model and 100x for temporal ensembling from the standard setting).\n\nCould the authors comment towards the scalability for larger problems? For ImageNet, you would need to store around 4.8 gigs for the temporal ensembling method or spend 2x as long training with \u03a0-model.\n\nCan the authors discuss sensitivity of this approach to the amount and location of dropout layers in the architecture? \n\nPreliminary rating:\nI think this is a very interesting paper with quality results and clear presentation. \n\nMinor note:\n2nd paragraph of page one 'without neither' -> 'without either'\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Temporal Ensembling for Semi-Supervised Learning
["Samuli Laine", "Timo Aila"]
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.
["labels", "unknown labels", "training", "temporal", "learning temporal", "learning", "simple", "efficient", "deep neural networks", "setting"]
https://openreview.net/forum?id=BJ6oOfqge
https://openreview.net/pdf?id=BJ6oOfqge
https://openreview.net/forum?id=BJ6oOfqge&noteId=SyezfkfEg
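Several of the reviews above ask about the w(t) ramp-up and the memory cost of temporal ensembling. A minimal NumPy sketch of the accumulator the abstract describes - an exponential moving average of per-example outputs across epochs with a start-up bias correction - where the alpha value, the Gaussian ramp-up schedule, and the buffer shape are all assumptions about one plausible instantiation:

```python
import numpy as np

num_examples, num_classes = 50000, 10
alpha = 0.6                                    # EMA momentum over epochs

def ensemble_targets(Z, epoch_outputs, epoch):
    """Fold one epoch of network outputs into the accumulator Z;
    return (new Z, bias-corrected ensemble targets)."""
    Z = alpha * Z + (1.0 - alpha) * epoch_outputs
    return Z, Z / (1.0 - alpha ** (epoch + 1))  # undo the zero-init bias

def ramp_up_weight(epoch, w_max, ramp_epochs=80):
    """Gaussian ramp-up of the unsupervised weight w(t) (schedule assumed)."""
    t = min(epoch / ramp_epochs, 1.0)
    return w_max * float(np.exp(-5.0 * (1.0 - t) ** 2))

Z = np.zeros((num_examples, num_classes))      # one score vector per example
```

The buffer Z stores one float vector of class scores per training example, which is exactly the num_examples x num_classes memory footprint one reviewer flags for ImageNet-scale data.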
B1atIp-Ve
BJuysoFeg
ICLR.cc/2017/conference/-/paper126/official/review
{"title": "An interesting paper that shows improvements, but I am not sure about its technical advantage", "rating": "5: Marginally below acceptance threshold", "review": "Overall I think this is an interesting paper which shows empirical performance improvement over baselines. However, my main concern with the paper is regarding its technical depth, as the gist of the paper can be summarized as the following: instead of keeping the batch norm mean and bias estimation over the whole model, estimate them on a per-domain basis. I am not sure if this is novel, as this is a natural extension of the original batch normalization paper. Overall I think this paper is more fit as a short workshop presentation rather than a full conference paper.\n\nDetailed comments:\n\nSection 3.1: I respectfully disagree that the core idea of BN is to align the distribution of training data. It does this as a side effect, but the major purpose of BN is to properly control the scale of the gradient so we can train very deep models without the problem of vanishing gradients. It is plausible that intermediate features from different datasets naturally show as different groups in a t-SNE embedding. This is not the particular feature of batch normalization: visualizing a set of intermediate features with AlexNet and one gets the same results. So the premise in section 3.1 is not accurate.\n\nSection 3.3: I have the same concern as the other reviewer. It seems to be quite detatched from the general idea of AdaBN. Equation 2 presents an obvious argument that the combined BN-fully_connected layer forms a linear transform, which is true in the original BN case and in this case as well. I do not think it adds much theoretical depth to the paper. (In general the novelty of this paper seems low)\n\nExperiments:\n\n- section 4.3.1 is not an accurate measure of the \"effectiveness\" of the proposed method, but a verification of a simple fact: previously, we normalize the source domain features into a Gaussian distribution. the proposed method is explicitly normalizing the target domain features into the same Gaussian distribution as well. So, it is obvious that the KL divergence between these two distributions are closer - in fact, one is *explicitly* making them close. However, this does not directly correlate to the effectiveness of the final classification performance.\n\n- section 4.3.2: the sensitivity analysis is a very interesting read, as it suggests that only a very few number of images are needed to account for the domain shift in the AdaBN parameter estimation. This seems to suggest that a single \"whitening\" operation is already good enough to offset the domain bias (in both cases shown, a single batch is sufficient to recover about 80% of the performance gain, although I cannot get data for even smaller number of examples from the figure). It would thus be useful to have a comparison between these approaches, and also a detailed analysis of the effect from each layer of the model - the current analysis seems a bit thin.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Revisiting Batch Normalization For Practical Domain Adaptation
["Yanghao Li", "Naiyan Wang", "Jianping Shi", "Jiaying Liu", "Xiaodi Hou"]
Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent studies show that a DNN has a strong dependency on the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves a deep adaptation effect for domain adaptation tasks. In contrast to other deep learning domain adaptation methods, our method does not require additional components and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary to other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance.
["dnn", "batch normalization", "network", "adabn", "practical domain adaptation", "unprecedented success", "image classification", "object detection"]
https://openreview.net/forum?id=BJuysoFeg
https://openreview.net/pdf?id=BJuysoFeg
https://openreview.net/forum?id=BJuysoFeg&noteId=B1atIp-Ve
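The review above boils the method down to estimating batch-norm statistics on a per-domain basis. A minimal PyTorch sketch of that recipe - freeze all learned parameters and simply re-estimate each BN layer's running mean and variance from forward passes over unlabeled target-domain batches; the use of momentum=None for a cumulative average is an assumption about one reasonable way to do this:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn_stats(model, target_loader):
    """Re-estimate BN statistics on unlabeled target-domain data."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()
            m.momentum = None      # cumulative average over target batches
    model.train()                  # train mode so forward passes update stats
    for x, _ in target_loader:
        model(x)
    model.eval()
    return model
```

No gradients are computed and no target labels are touched, which matches the parameter-free claim in the abstract.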
rkpVV6H4l
BJuysoFeg
ICLR.cc/2017/conference/-/paper126/official/review
{"title": "trivially simple yet effective", "rating": "6: Marginally above acceptance threshold", "review": "This paper proposes a simple domain adaptation technique in which batch normalization is performed separately in each domain.\n\n\nPros:\n\nThe method is very simple and easy to understand and apply.\n\nThe experiments demonstrate that the method compares favorably with existing methods on standard domain adaptation tasks.\n\nThe analysis in section 4.3.2 shows that a very small number of target domain samples are needed for adaptation of the network.\n\n\nCons:\n\nThere is little novelty -- the method is arguably too simple to be called a \u201cmethod.\u201d Rather, it\u2019s the most straightforward/intuitive approach when using a network with batch normalization for domain adaptation. The alternative -- using the BN statistics from the source domain for target domain examples -- is less natural, to me. (I guess this alternative is what\u2019s done in the Inception BN results in Table 1-2?)\n\nThe analysis in section 4.3.1 is superfluous except as a sanity check -- KL divergence between the distributions should be 0 when each distribution is shifted/scaled to N(0,1) by BN.\n\nSection 3.3: it\u2019s not clear to me what point is being made here.\n\n\nOverall, there\u2019s not much novelty here, but it\u2019s hard to argue that simplicity is a bad thing when the method is clearly competitive with or outperforming prior work on the standard benchmarks (in a domain adaptation tradition that started with \u201cFrustratingly Easy Domain Adaptation\u201d). If accepted, Sections 4.3.1 and 3.3 should be removed or rewritten for clarity for a final version.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Revisiting Batch Normalization For Practical Domain Adaptation
["Yanghao Li", "Naiyan Wang", "Jianping Shi", "Jiaying Liu", "Xiaodi Hou"]
Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent studies show that a DNN has a strong dependency on the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves a deep adaptation effect for domain adaptation tasks. In contrast to other deep learning domain adaptation methods, our method does not require additional components and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary to other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance.
["dnn", "batch normalization", "network", "adabn", "practical domain adaptation", "unprecedented success", "image classification", "object detection"]
https://openreview.net/forum?id=BJuysoFeg
https://openreview.net/pdf?id=BJuysoFeg
https://openreview.net/forum?id=BJuysoFeg&noteId=rkpVV6H4l
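The sanity-check remark in the review above - that the KL divergence should be 0 once both domains are normalized to N(0,1) - follows directly from the closed form for univariate Gaussians:

```latex
\mathrm{KL}\big(\mathcal{N}(\mu_1,\sigma_1^2)\,\|\,\mathcal{N}(\mu_2,\sigma_2^2)\big)
= \ln\frac{\sigma_2}{\sigma_1}
+ \frac{\sigma_1^2 + (\mu_1-\mu_2)^2}{2\sigma_2^2}
- \frac{1}{2},
```

which evaluates to ln 1 + 1/2 - 1/2 = 0 when mu_1 = mu_2 = 0 and sigma_1 = sigma_2 = 1.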
rJW8h4GEl
BJuysoFeg
ICLR.cc/2017/conference/-/paper126/official/review
{"title": "Final review", "rating": "4: Ok but not good enough - rejection", "review": "Update: I thank the authors for their comments. I still think that the method needs more experimental evaluation: for now, it's restricted to the settings in which one can use pre-trained ImageNet model, but it's also important to show the effectiveness in scenarios where one needs to train everything from scratch. I would love to see a fair comparison of the state-of-the-art methods on toy datasets (e.g. see (Bousmalis et al., 2016), (Ganin & Lempitsky, 2015)). In my opinion, it's a crucial point that doesn't allow me to increase the rating.\n\nThis paper proposes a simple trick turning batch normalization into a domain adaptation technique. The authors show that per-batch means and variances normally computed as a part of the BN procedure are sufficient to discriminate the domain. This observation leads to an idea that adaptation for the target domain can be performed by replacing population statistics computed on the source dataset by the corresponding statistics from the target dataset.\n\nOverall, I think the paper is more suitable for a workshop track rather than for the main conference track. My main concerns are the following:\n\n1. Although the main idea is very simple, it feels like the paper is composed in such a way to make the main contribution less obvious (e.g. the idea could have been described in the abstract but the authors avoided doing so). \n\n2. (This one is from the pre-review questions) The authors are using much stronger base CNN which may account for the bulk of the reported improvement. In order to prove the effectiveness of the trick, the authors would need to conduct a fair comparison against competing methods. It would be highly desirable to conduct this comparison also in the case of a model trained from scratch (as opposed to reusing ImageNet-trained networks).\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Revisiting Batch Normalization For Practical Domain Adaptation
["Yanghao Li", "Naiyan Wang", "Jianping Shi", "Jiaying Liu", "Xiaodi Hou"]
Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent studies show that a DNN has a strong dependency on the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves a deep adaptation effect for domain adaptation tasks. In contrast to other deep learning domain adaptation methods, our method does not require additional components and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary to other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance.
["dnn", "batch normalization", "network", "adabn", "practical domain adaptation", "unprecedented success", "image classification", "object detection"]
https://openreview.net/forum?id=BJuysoFeg
https://openreview.net/pdf?id=BJuysoFeg
https://openreview.net/forum?id=BJuysoFeg&noteId=rJW8h4GEl
Hk4IU9t4g
SJJN38cge
ICLR.cc/2017/conference/-/paper327/official/review
{"title": "review", "rating": "3: Clear rejection", "review": "This work proposes to use basic probability assignment to improve deep transfer learning. A particular re-weighting scheme inspired by Dempster-Shaffer and exploiting the confusion matrix of the source task is introduced. The authors also suggest learning the convolutional filters separately to break non-convexity. \n\nThe main problem with this paper is the writing. There are many typos, and the presentation is not clear. For example, the way the training set for weak classifiers are constructed remains unclear to me despite the author's previous answer. I do not buy the explanation about the use of both training and validation sets to compute BPA. Also, I am not convinced non-convexity is a problem here and the author does not provide an ablation study to validate the necessity of separately learning the filters. One last question is CIFAR has three channels and MNIST only one: How it this handled when pairing the datasets in the second set of experiments?\n\nOverall, I believe the proposed idea of reweighing is interesting, but the work can be globally improved/clarified. \n\nI suggest a reject. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Distributed Transfer Learning for Deep Convolutional Neural Networks by Basic Probability Assignment
["Arash Shahriari"]
Transfer learning is a popular practice in deep neural networks, but fine-tuning of a large number of parameters is a hard challenge due to the complex wiring of neurons between splitting layers and the imbalanced class distributions of the original and transferred domains. Recent advances in evidence theory show that in an imbalanced multiclass learning problem, optimizing proper objective functions based on contingency tables prevents biases towards high-prior classes. Transfer learning usually deals with highly non-convex objectives and local minima in deep neural architectures. We propose a novel distributed transfer learning approach to tackle both the optimization complexity and the class-imbalance problem jointly. Our solution imposes separate greedy regularization on each individual convolutional filter to make single-filter neural networks such that the minority classes perform as well as the majority ones. Then, basic probability assignment from evidence theory boosts these distributed networks to improve the recognition performance on the target domains. Our experiments on several standard datasets confirm the consistent improvement as a result of our distributed transfer learning strategy.
["Deep learning", "Transfer Learning", "Supervised Learning", "Optimization"]
https://openreview.net/forum?id=SJJN38cge
https://openreview.net/pdf?id=SJJN38cge
https://openreview.net/forum?id=SJJN38cge&noteId=Hk4IU9t4g
S1m50VGEl
SJJN38cge
ICLR.cc/2017/conference/-/paper327/official/review
{"title": "Final review.", "rating": "3: Clear rejection", "review": "Update: I thank the author for his comments! At this point, the paper is still not suitable for publication, so I'm leaving the rating untouched.\n\nThis paper proposes a transfer learning method addressing optimization complexity and class imbalance.\n\nMy main concerns are the following:\n\n1. The paper is quite hard to read due to typos, unusual phrasing and loose use of terminology like \u201cdistributed\u201d, \u201ctransfer learning\u201d (meaning \u201cfine-tuning\u201d), \u201csoftmax\u201d (meaning \u201cfully-connected\u201d), \u201cdeep learning\u201d (meaning \u201cbase neural network\u201d), etc. I\u2019m still not sure I got all the details of the actual algorithm right.\n\n2. The captions to the figures and tables are not very informative \u2013 one has to jump back and forth through the paper to understand what the numbers/images mean.\n\n3. From what I understand, the authors use \u201cconventional transfer learning\u201d to refer to fine-tuning of the fully-connected layers only (I\u2019m judging by Figure 1). In this case, it\u2019s essential to compare the proposed method with regimes when some of the convolutional layers are also updated. This comparison is not present in the paper.\n\nComments on the pre-review questions:\n\n1. Question 1: If the paper only considers the case |C|==|L|, it would be better to reduce the notation clutter.\n\n2. Question 2: It is still not clear what the authors mean by distributed transfer learning. Figure 1 is supposed to highlight the difference from the conventional approach (fine-tuning of the fully-connected layers; by the way, I don\u2019t think, Softmax is a conventional term for fully-connected layers). From the diagram, it follows that the base CNN has the same number of convolutional filters at every layer and, in order to obtain a distributed ensemble, we need to connect (for some reason) filters with the same indices. This does not make a lot of sense to me but I\u2019m probably misinterpreting the figure. Could the authors revise the diagram to make it clearer?\n\nOverall, I think the paper needs significant refinement in order improve the clarity of presentation and thus cannot be accepted as it is now.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Distributed Transfer Learning for Deep Convolutional Neural Networks by Basic Probability Assignment
["Arash Shahriari"]
Transfer learning is a popular practice in deep neural networks, but fine-tuning of a large number of parameters is a hard challenge due to the complex wiring of neurons between splitting layers and the imbalanced class distributions of the original and transferred domains. Recent advances in evidence theory show that in an imbalanced multiclass learning problem, optimizing proper objective functions based on contingency tables prevents biases towards high-prior classes. Transfer learning usually deals with highly non-convex objectives and local minima in deep neural architectures. We propose a novel distributed transfer learning approach to tackle both the optimization complexity and the class-imbalance problem jointly. Our solution imposes separate greedy regularization on each individual convolutional filter to make single-filter neural networks such that the minority classes perform as well as the majority ones. Then, basic probability assignment from evidence theory boosts these distributed networks to improve the recognition performance on the target domains. Our experiments on several standard datasets confirm the consistent improvement as a result of our distributed transfer learning strategy.
["Deep learning", "Transfer Learning", "Supervised Learning", "Optimization"]
https://openreview.net/forum?id=SJJN38cge
https://openreview.net/pdf?id=SJJN38cge
https://openreview.net/forum?id=SJJN38cge&noteId=S1m50VGEl
HJm7-kf4g
SJJN38cge
ICLR.cc/2017/conference/-/paper327/official/review
{"title": "", "rating": "4: Ok but not good enough - rejection", "review": "This paper proposed to use the BPA criterion for classifier ensembles.\n\nMy major concern with the paper is that it attempts to mix quite a few concepts together, and as a result, some of the simple notions becomes a bit hard to understand. For example:\n\n(1) \"Distributed\" in this paper basically means classifier ensembles, and has nothing to do with the distributed training or distributed computation mechanism. Granted, one can train these individual classifiers in a distributed fashion but this is not the point of the paper.\n\n(2) The paper uses \"Transfer learning\" in its narrow sense: it basically means fine-tuning the last layer of a pre-trained classifier.\n\nAside from the concept mixture of the paper, other comments I have about the paper are:\n\n(1) I am not sure how BPA address class inbalance better than simple re-weighting. Essentially, the BPA criteria is putting equal weights on different classes, regardless of the number of training data points each class has. This is a very easy thing to address in conventional training: adding a class-specific weight term to each data point with the value being the inverse of the number of data points will do.\n\n(2) Algorithm 2 is not presented correctly as it implies that test data is used during training, which is not correct: only training and validation dataset should be used. I find the paper's use of \"train/validation\" and \"test\" quite confusing: why \"train/validation\" is always presented together? How to properly distinguish between them?\n\n(3) If I understand correctly, the paper is proposing to compute the BPA in a batch fashion, i.e. BPA can only be computed when running the model over the full train/validation dataset. This contradicts with the stochastic gradient descent that are usually used in deep net training - how does BPA deal with that? I believe that an experimental report on the computation cost and timing is missing.\n\nIn general, I find the paper not presented in its clearest form and a number of key definitions ambiguous.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Distributed Transfer Learning for Deep Convolutional Neural Networks by Basic Probability Assignment
["Arash Shahriari"]
Transfer learning is a popular practice in deep neural networks, but fine-tuning of a large number of parameters is a hard challenge due to the complex wiring of neurons between splitting layers and the imbalanced class distributions of the original and transferred domains. Recent advances in evidence theory show that in an imbalanced multiclass learning problem, optimizing proper objective functions based on contingency tables prevents biases towards high-prior classes. Transfer learning usually deals with highly non-convex objectives and local minima in deep neural architectures. We propose a novel distributed transfer learning approach to tackle both the optimization complexity and the class-imbalance problem jointly. Our solution imposes separate greedy regularization on each individual convolutional filter to make single-filter neural networks such that the minority classes perform as well as the majority ones. Then, basic probability assignment from evidence theory boosts these distributed networks to improve the recognition performance on the target domains. Our experiments on several standard datasets confirm the consistent improvement as a result of our distributed transfer learning strategy.
["Deep learning", "Transfer Learning", "Supervised Learning", "Optimization"]
https://openreview.net/forum?id=SJJN38cge
https://openreview.net/pdf?id=SJJN38cge
https://openreview.net/forum?id=SJJN38cge&noteId=HJm7-kf4g
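The review above suggests that class imbalance can be handled in conventional training by weighting each data point by the inverse of its class frequency. A minimal sketch of that baseline - this is the reviewer's suggested alternative, not the paper's BPA procedure:

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Per-class weights proportional to inverse class frequency,
    normalized so the mean weight is 1."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    weights = 1.0 / np.maximum(counts, 1.0)   # guard against empty classes
    return weights * num_classes / weights.sum()
```

These weights can then be supplied, for example, as the per-class weight argument of a standard cross-entropy loss, which is exactly the "class-specific weight term" the reviewer describes.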
B1F8hRVEg
Hkg8bDqee
ICLR.cc/2017/conference/-/paper360/official/review
{"title": "novel idea but requires more details / experimentation", "rating": "8: Top 50% of accepted papers, clear accept", "review": "The paper reads well and the idea is new.\nSadly, many details needed for replicating the results (such as layer sizes of the CNNs, learning rates) are missing. \nThe training of the introspection network could have been described in more detail. \nAlso, I think that a model, which is closer to the current state-of-the-art should have been used in the ImageNet experiments. That would have made the results more convincing.\nDue to the novelty of the idea, I recommend the paper. I would increase the rating if an updated draft addresses the mentioned issues.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Introspection: Accelerating Neural Network Training By Learning Weight Evolution
["Abhishek Sinha", "Aahitagni Mukherjee", "Mausoom Sarkar", "Balaji Krishnamurthy"]
Neural networks are function approximators that have achieved state-of-the-art accuracy in numerous machine learning tasks. In spite of their great success in terms of accuracy, their large training time makes it difficult to use them for various tasks. In this paper, we explore the idea of learning the weight evolution pattern from a simple network for accelerating the training of novel neural networks. We use a neural network to learn the training pattern from MNIST classification and utilize it to accelerate the training of neural networks used for CIFAR-10 and ImageNet classification. Our method has a low memory footprint and is computationally efficient. This method can also be used with other optimizers to give faster convergence. The results indicate a general trend in the weight evolution during the training of neural networks.
["Computer vision", "Deep learning", "Optimization"]
https://openreview.net/forum?id=Hkg8bDqee
https://openreview.net/pdf?id=Hkg8bDqee
https://openreview.net/forum?id=Hkg8bDqee&noteId=B1F8hRVEg
SktFDJDNl
Hkg8bDqee
ICLR.cc/2017/conference/-/paper360/official/review
{"title": "Review", "rating": "7: Good paper, accept", "review": "In this paper, the authors use a separate introspection neural network to predict the future value of the weights directly from their past history. The introspection network is trained on the parameter progressions collected from training separate set of meta learning models using a typical optimizer, e.g. SGD. \n\nPros:\n+ The organization is generally very clear\n+ Novel meta-learning approach that is different than the previous learning to learn approach\n\nCons: \n- The paper will benefit from more thorough experiments on other neural network architectures where the geometry of the parameter space are sufficiently different than CNNs such as fully connected and recurrent neural networks. \n- Neither MNIST nor CIFAR experimental section explained the architectural details\n- Mini-batch size for the experiments were not included in the paper\n- Comparison with different baseline optimizer such as Adam would be a strong addition or at least explain how the hyper-parameters, such as learning rate and momentum, are chosen for the baseline SGD method. \n\nOverall, due to the omission of the experimental details in the current revision, it is hard to draw any conclusive insight about the proposed method. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Introspection: Accelerating Neural Network Training By Learning Weight Evolution
["Abhishek Sinha", "Aahitagni Mukherjee", "Mausoom Sarkar", "Balaji Krishnamurthy"]
Neural Networks are function approximators that have achieved state-of-the-art accuracy in numerous machine learning tasks. In spite of their great success in terms of accuracy, their large training time makes it difficult to use them for various tasks. In this paper, we explore the idea of learning the weight evolution pattern from a simple network to accelerate the training of novel neural networks. We use a neural network to learn the training pattern from MNIST classification and utilize it to accelerate the training of neural networks used for CIFAR-10 and ImageNet classification. Our method has a low memory footprint and is computationally efficient. This method can also be used with other optimizers to give faster convergence. The results indicate a general trend in the weight evolution during the training of neural networks.
["Computer vision", "Deep learning", "Optimization"]
https://openreview.net/forum?id=Hkg8bDqee
https://openreview.net/pdf?id=Hkg8bDqee
https://openreview.net/forum?id=Hkg8bDqee&noteId=SktFDJDNl
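The reviews above describe the core mechanism as a small network that maps a weight's past values to a predicted future value. A minimal sketch of that idea, with hypothetical layer sizes and history length, since the reviews note the paper leaves these unspecified (this is illustrative, not the authors' code):

```python
import torch
import torch.nn as nn

# Illustrative introspection network: takes k past values of a single
# scalar weight and predicts that weight's value at a future step.
k = 4
introspection = nn.Sequential(
    nn.Linear(k, 40),
    nn.ReLU(),
    nn.Linear(40, 1),
)

# Each weight of the target network is treated independently, so a
# "batch" is simply many weights' histories stacked together.
histories = torch.randn(1024, k)              # 1024 weights, k past values
predicted_future = introspection(histories)   # shape (1024, 1)

# At a "jump point", the target network's weights would be overwritten
# with these predictions before resuming SGD.
print(predicted_future.shape)
```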
Bkm_OazEx
Hkg8bDqee
ICLR.cc/2017/conference/-/paper360/official/review
{"title": "Valuable insight but needs careful analysis", "rating": "9: Top 15% of accepted papers, strong accept", "review": "EDIT: Updated score. See additional comment.\n\nI quite like the main idea of the paper, which is based on the observation in Sec. 3.0 - that the authors find many predictable patterns in the independent evolution of weights during neural network training. It is very encouraging that a simple neural network can be used to speed up training by directly predicting weights.\n\nHowever the technical quality of the current paper leaves much to be desired, and I encourage the authors to do more rigorous analysis of the approach. Here are some concrete suggestions:\n\n- The findings in Section 3.0 which motivate the approach, should be clearly presented in the paper. Presently they are stated as anecdotes.\n\n- A central issue with the paper is that the training of the Introspection network I is completely glossed over. How well did the training work, in terms of training, validation/test losses? How well does it need to work in order to be useful for speeding up training? These are important questions for anyone interested in this approach.\n\n- An additional important issue is that of baselines. Would a simple linear/quadratic model also work instead of a neural network? What about a simple heuristic rule to increase/decrease weights? I think it's important to compare to such baselines to understand the complexity of the weight evolution learned by the neural network.\n\n- I do not think that default tensorflow example hyperparameters should be used, as mentioned by authors on OpenReview. There is no scientific basis for using them. Instead, first hyperparameters which produce good results in a reasonable time should be selected as the baseline, and then added the benefit of the introspection network to speed up training (and reaching a similar result) should be shown.\n\n- The authors state in the discussion on OpenReview that they also tried RNNs as the introspection network but it didn't work with small state size. What does \"didn't work\" mean in this context? Did it underfit? I find it hard to imagine that a large state size would be required for this task. Even if it is, that doesn't rule out evaluation due to memory issues because the RNN can be run on the weights in 'mini-batch' mode. In general, I think other baselines are more important than RNN.\n\n- A question about jump points: \nThe I is trained on SGD trajectories. While using I to speed up training at several jump points, if the input weights cross previous jump points, then I gets input data from a weight evolution which is not from SGD (it has been altered by I). This seems problematic but doesn't seem to affect your experiments. I feel that this again highlights the importance of the baselines. Perhaps I is doing something extremely simple that is not affected by this issue.\n\nSince the main idea is very interesting, I will be happy to update my score if the above concerns are addressed. ", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Introspection: Accelerating Neural Network Training By Learning Weight Evolution
["Abhishek Sinha", "Aahitagni Mukherjee", "Mausoom Sarkar", "Balaji Krishnamurthy"]
Neural Networks are function approximators that have achieved state-of-the-art accuracy in numerous machine learning tasks. In spite of their great success in terms of accuracy, their large training time makes it difficult to use them for various tasks. In this paper, we explore the idea of learning the weight evolution pattern from a simple network to accelerate the training of novel neural networks. We use a neural network to learn the training pattern from MNIST classification and utilize it to accelerate the training of neural networks used for CIFAR-10 and ImageNet classification. Our method has a low memory footprint and is computationally efficient. This method can also be used with other optimizers to give faster convergence. The results indicate a general trend in the weight evolution during the training of neural networks.
["Computer vision", "Deep learning", "Optimization"]
https://openreview.net/forum?id=Hkg8bDqee
https://openreview.net/pdf?id=Hkg8bDqee
https://openreview.net/forum?id=Hkg8bDqee&noteId=Bkm_OazEx
BkjrLVG4x
HyWDCXjgx
ICLR.cc/2017/conference/-/paper552/official/review
{"title": "Contribution not clear enough; concerns about data set itself", "rating": "3: Clear rejection", "review": "The manuscript is a bit scattered and hard to follow. There is technical depth but the paper doesn't do a good job explaining what shortcoming the proposed methods are overcoming and what baselines they are outperforming. \n\nThe writing could be improved. There are numerous grammatical errors.\n\nThe experiments in 3.1 are interesting, but you need to be clearer about the relationship of your ResCeption method to the state-of-the-art. The use of extensive footnotes on page 5 is a bit odd. \"That is a competitive result\" is vague. A footnote links to \"http://image-net.org/challenges/LSVRC/2015/results\" which doesn't seem to even show the same task you are evaluating. ResCeption: \"The best validation error is reached at 23.37% and 6.17% at top-1 and top-5, respectively\". Single model ResNet-152 gets 19.38 and 4.49, respectively. Resnet-34 is 21.8 and 5.7, respectively. VGGv5 is 24.4 and 7.1, respectively. [source: Deep Residual Learning for Image Recognition, He et al. 2015]. I think it would be more honest for you to report results of competitors and say that your model is worse than ResNet and slightly better than VGG on ImageNet classification.\n\n3.5, retrieval on Holidays, is a bit too much of a diversion from the goal of this paper. If this paper is more about the novel architecture and less about the particular fashion attribute task then the narrative needs to change accordingly.\n\nPerhaps my biggest concern is that this paper is missing baselines (e.g. non recurrent models, attribute classification instead of detection) and comparisons to prior work by Berg et al.\n\n\"Our policy restricts to reveal much more details about the internal dataset\" This is a significant issue. The dataset used in this work cannot be shared? How are future works going to compare to your benchmark?\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Multi-label learning with the RNNs for Fashion Search
["Taewan Kim"]
We build a large-scale visual search system which finds similar product images given a fashion item. Defining similarity among arbitrary fashion products remains a challenging problem, especially since there is no exact ground truth. To resolve this problem, we define more than 90 fashion-related attributes, and combinations of these attributes can represent thousands of unique fashion styles. We then introduce recurrent neural networks (RNNs) that recognise multiple fashion attributes in an end-to-end manner. To build our system at scale, these fashion attributes are again used to build an inverted indexing scheme. In addition to these fashion attributes for semantic similarity, we extract colour and appearance features in a region-of-interest (ROI) of a fashion item for visual similarity. By sharing our approach, we expect active discussion on how to apply current deep learning research in the e-commerce industry.
["Computer vision", "Deep learning", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=HyWDCXjgx
https://openreview.net/pdf?id=HyWDCXjgx
https://openreview.net/forum?id=HyWDCXjgx&noteId=BkjrLVG4x
rJJNlzU4g
HyWDCXjgx
ICLR.cc/2017/conference/-/paper552/official/review
{"title": "interesting exploration but several major concerns", "rating": "4: Ok but not good enough - rejection", "review": "The paper presents a large-scale visual search system for finding product images given a fashion item. The exploration is interesting and the paper does a nice job of discussing the challenges of operating in this domain. The proposed approach addresses several of the challenges. \n\nHowever, there are several concerns.\n\n1) The main concern is that there are no comparisons or even mentions of the work done by Tamara Berg\u2019s group on fashion recognition and fashion attributes, e.g., \n- \u201cAutomatic Attribute Discovery and Characterization from Noisy Web Data\u201d ECCV 2010 \n- \u201cWhere to Buy It: Matching Street Clothing Photos in Online Shops\u201d ICCV 2015,\n- \u201cRetrieving Similar Styles to Parse Clothing, TPAMI 2014,\netc\nIt is difficult to show the contribution and novelty of this work without discussing and comparing with this extensive prior art.\n\n2) There are not enough details about the attribute dataset and the collection process. What is the source of the images? Are these clean product images or real-world images? How is the annotation done? What instructions are the annotators given? What annotations are being collected? I understand data statistics for example may be proprietary, but these kinds of qualitative details are important to understand the contributions of the paper. How can others compare to this work?\n\n3) There are some missing baselines. How do the results in Table 2 compare to simpler methods, e.g., the BM or CM methods described in the text?\n\nWhile the paper presents an interesting exploration, all these concerns would need to be addressed before the paper can be ready for publication.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Multi-label learning with the RNNs for Fashion Search
["Taewan Kim"]
We build a large-scale visual search system which finds similar product images given a fashion item. Defining similarity among arbitrary fashion products remains a challenging problem, especially since there is no exact ground truth. To resolve this problem, we define more than 90 fashion-related attributes, and combinations of these attributes can represent thousands of unique fashion styles. We then introduce recurrent neural networks (RNNs) that recognise multiple fashion attributes in an end-to-end manner. To build our system at scale, these fashion attributes are again used to build an inverted indexing scheme. In addition to these fashion attributes for semantic similarity, we extract colour and appearance features in a region-of-interest (ROI) of a fashion item for visual similarity. By sharing our approach, we expect active discussion on how to apply current deep learning research in the e-commerce industry.
["Computer vision", "Deep learning", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=HyWDCXjgx
https://openreview.net/pdf?id=HyWDCXjgx
https://openreview.net/forum?id=HyWDCXjgx&noteId=rJJNlzU4g
B1Mp8grVl
HyWDCXjgx
ICLR.cc/2017/conference/-/paper552/official/review
{"title": "Good practical visual search system but lack novelty", "rating": "3: Clear rejection", "review": "This paper introduces a pratical large-scale visual search system for a fashion site. It uses RNN to recognize multi-label attributes and uses state-of-art faster RCNN to extract features inside those region-of-interest (ROI). The technical contribution of this paper is not clear. Most of the approaches used are standard state-of-art methods and there are not a lot of novelties in applying those methods. For multi-label recognition task, there are other available methods, e.g. using binary models, changing cross-entropy loss function, etc. There aren't any comparison between the RNN method and other simple baselines. The order of the sequential RNN prediction is not clear either. It seems that the attributes form a tree hierarchy and that is used as the order of sequence.\n\nThe paper is not well written either. Most results are reported in the internal dataset and the authors won't release the dataset. \n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Multi-label learning with the RNNs for Fashion Search
["Taewan Kim"]
We build a large-scale visual search system which finds similar product images given a fashion item. Defining similarity among arbitrary fashion products remains a challenging problem, especially since there is no exact ground truth. To resolve this problem, we define more than 90 fashion-related attributes, and combinations of these attributes can represent thousands of unique fashion styles. We then introduce recurrent neural networks (RNNs) that recognise multiple fashion attributes in an end-to-end manner. To build our system at scale, these fashion attributes are again used to build an inverted indexing scheme. In addition to these fashion attributes for semantic similarity, we extract colour and appearance features in a region-of-interest (ROI) of a fashion item for visual similarity. By sharing our approach, we expect active discussion on how to apply current deep learning research in the e-commerce industry.
["Computer vision", "Deep learning", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=HyWDCXjgx
https://openreview.net/pdf?id=HyWDCXjgx
https://openreview.net/forum?id=HyWDCXjgx&noteId=B1Mp8grVl
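The reviews above ask for simpler multi-label baselines than the RNN, e.g., independent binary classifiers over a shared image feature. A minimal sketch of that baseline, with hypothetical feature and attribute sizes (not the paper's model):

```python
import torch
import torch.nn as nn

num_attributes = 90   # the paper defines "more than 90" fashion attributes
feature_dim = 2048    # hypothetical CNN feature size

# One sigmoid output per attribute, trained with binary cross-entropy:
# the standard non-recurrent multi-label baseline the reviews ask for.
head = nn.Linear(feature_dim, num_attributes)
criterion = nn.BCEWithLogitsLoss()

features = torch.randn(16, feature_dim)                     # CNN features
labels = torch.randint(0, 2, (16, num_attributes)).float()  # multi-hot targets
print(criterion(head(features), labels).item())
```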
r1UHA4XNg
rJEgeXFex
ICLR.cc/2017/conference/-/paper84/official/review
{"title": "Thorough empirical investigation of an interesting and (to my knowledge) novel application area", "rating": "7: Good paper, accept", "review": "This is a well written, organized, and presented paper that I enjoyed reading. I commend the authors on their attention to the narrative and the explanations. While it did not present any new methodology or architecture, it instead addressed an important application of predicting the medications a patient is using, given the record of billing codes. The dataset they use is impressive and useful and, frankly, more interesting than the typical toy datasets in machine learning. That said, the investigation of those results was not as deep as I thought it should have been in an empirical/applications paper. Despite their focus on the application, I was encouraged to see the authors use cutting edge choices (eg Keras, adadelta, etc) in their architecture. A few points of criticism:\n\n-The numerical results are in my view too brief. Fig 4 is anecdotal, Fig 5 is essentially a negative result (tSNE is only in some places interpretable), so that leaves Table 1. I recognize there is only one dataset, but this does not offer a vast amount of empirical evidence and analysis that one might expect out of a paper with no major algorithmic/theoretical advances. To be clear I don't think this is disqualifying or deeply concerning; I simply found it a bit underwhelming.\n\n- To be constructive, re the results I would recommend removing Fig 5 and replacing that with some more meaningful analysis of performance. I found Fig 5 to be mostly uninformative, other than as a negative result, which I think can be stated in a sentence rather than in a large figure.\n\n- There is a bit of jargon used and expertise required that may not be familiar to the typical ICLR reader. I saw that another reviewer suggested perhaps ICLR is not the right venue for this work. While I certainly see the reviewer's point that a medical or healthcare venue may be more suitable, I do want to cast my vote of keeping this paper here... our community benefits from more thoughtful and in depth applications. Instead I think this can be addressed by tightening up those points of jargon and making the results more easy to evaluate by an ICLR reader (that is, as it stands now researchers without medical experience have to take your results after Table 1 on faith, rather than getting to apply their well-trained quantitative eye). \n\nOverall, a nice paper.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Predicting Medications from Diagnostic Codes with Recurrent Neural Networks
["Jacek M. Bajor", "Thomas A. Lasko"]
It is a surprising fact that electronic medical records are failing at one of their primary purposes, that of tracking the set of medications that the patient is actively taking. Studies estimate that up to 50% of such lists omit active drugs, and that up to 25% of all active medications do not appear on the appropriate patient list. Manual efforts to maintain these lists involve a great deal of tedious human labor, which could be reduced by computational tools to suggest likely missing or incorrect medications on a patient’s list. We report here an application of recurrent neural networks to predict the likely therapeutic classes of medications that a patient is taking, given a sequence of the last 100 billing codes in their record. Our best model was a GRU that achieved high prediction accuracy (micro-averaged AUC 0.93, Label Ranking Loss 0.076), limited by hardware constraints on model size. Additionally, examining individual cases revealed that many of the predictions marked incorrect were likely to be examples of either omitted medications or omitted billing codes, supporting our assertion of a substantial number of errors and omissions in the data, and the likelihood of models such as these to help correct them.
["Deep learning", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=rJEgeXFex
https://openreview.net/pdf?id=rJEgeXFex
https://openreview.net/forum?id=rJEgeXFex&noteId=r1UHA4XNg
S11T5vW4e
rJEgeXFex
ICLR.cc/2017/conference/-/paper84/official/review
{"title": "Good medical application paper for a medical or data science venue", "rating": "6: Marginally above acceptance threshold", "review": "This is a well-conducted and well-written study on the prediction of medication from diagnostic codes. The authors compared GRUs, LSTMs, feed-forward networks and random forests (making a case for why random forests should be used, instead of SVMs) and analysed the predictions and embeddings.\n\nThe authors also did address the questions of the reviewers.\n\nMy only negative point is that this work might be more relevant for a data science or medical venue rather than at ICLR.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Predicting Medications from Diagnostic Codes with Recurrent Neural Networks
["Jacek M. Bajor", "Thomas A. Lasko"]
It is a surprising fact that electronic medical records are failing at one of their primary purposes, that of tracking the set of medications that the patient is actively taking. Studies estimate that up to 50% of such lists omit active drugs, and that up to 25% of all active medications do not appear on the appropriate patient list. Manual efforts to maintain these lists involve a great deal of tedious human labor, which could be reduced by computational tools to suggest likely missing or incorrect medications on a patient’s list. We report here an application of recurrent neural networks to predict the likely therapeutic classes of medications that a patient is taking, given a sequence of the last 100 billing codes in their record. Our best model was a GRU that achieved high prediction accuracy (micro-averaged AUC 0.93, Label Ranking Loss 0.076), limited by hardware constraints on model size. Additionally, examining individual cases revealed that many of the predictions marked incorrect were likely to be examples of either omitted medications or omitted billing codes, supporting our assertion of a substantial number of errors and omissions in the data, and the likelihood of models such as these to help correct them.
["Deep learning", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=rJEgeXFex
https://openreview.net/pdf?id=rJEgeXFex
https://openreview.net/forum?id=rJEgeXFex&noteId=S11T5vW4e
rJEGyBz4g
rJEgeXFex
ICLR.cc/2017/conference/-/paper84/official/review
{"title": "Strong application work, very important problem", "rating": "8: Top 50% of accepted papers, clear accept", "review": "In light of the detailed author responses and further updates to the manuscript, I am raising my score to an 8 and reiterating my support for this paper. I think it will be among the strongest non-traditional applied deep learning work at ICLR and will receive a great deal of interest and attention from attendees.\n\n-----\n\nThis paper describes modern deep learning approach to the problem of predicting the medications taken by a patient during a period of time based solely upon the sequence of ICD-9 codes assigned to the patient during that same time period. This problem is formulated as a multilabel sequence classification (in contrast to language modeling, which is multiclass classification). They propose to use standard LSTM and GRU architectures with embedding layers to handle the sparse categorical inputs, similar to that described in related work by Choi, et al. In experiments using a cohort of ~610K patient records, they find that RNN models outperform strong baselines including an MLP and a random forest, as well as a common sense baseline. The differences in performance between the recurrent models and the MLP appear to be large enough to be significant, given the size of the test set.\n\nStrengths:\n- Very important problem. As the authors point out, two the value propositions of EHRs -- which have been widely adopted throughout the US due to a combination of legislation and billions of dollars in incentives from the federal government -- included more accurate records and fewer medication mistakes. These two benefits have largely failed to materialize. This seems like a major opportunity for data mining and machine learning.\n- Paper is well-written with lucid introduction and motivation, thorough discussion of related work, clear description of experiments and metrics, and interesting qualitative analysis of results.\n- Empirical results are solid with a strong win for RNNs over convincing baselines. This is in contrast to some recent related papers, including Lipton & Kale et al, ICLR 2016, where the gap between the RNN and MLP was relatively small, and Choi et al, MLHC 2016, which omitted many obvious baselines.\n- Discussion is thorough and thoughtful. The authors are right about the kidney code embedding results: this is a very promising result.\n\nWeaknesses:\n- The authors make several unintuitive decisions related to data preprocessing and experimental design, foremost among them the choice NOT to use full patient sequences but instead only truncated patient sequences that each ends at randomly chosen time point. This does not necessarily invalidate their results, but it is somewhat unnatural and the explanation is difficult to follow, reducing the paper's potential impact. It is also reduces the RNN's potential advantage.\n- The chosen metrics seem appropriate, but non-experts may have trouble interpreting the absolute and relative performances (beyond the superficial, e.g., RNN score 0.01 more than NN!). The authors should invest some space in explaining (1) what level of performance -- for each metric -- would be necessary for the model to be useful in a real clinical setting and (2) whether the gaps between the various models are \"significant\" (even in an informal sense).\n- The paper proposes nothing novel in terms of methods, which is a serious weakness for a methods conference like ICLR. 
I think it is strong enough empirically (and sufficiently interesting in application) to warrant acceptance regardless, but there may be things the authors can do to make it more competitive. For example, one potential hypothesis is that higher capacity models are more prone to overfitting noisy targets. Is there some way to investigate this, perhaps by looking at the kinds of errors each model makes?\n\nI have a final comment: as a piece of clinical work, the paper has a huge weakness: the lack of ground truth labels for missing medications. Models are both trained and tested on data with noisy labels. For training, the authors are right that this shouldn't be a huge problem, provided the label noise is random (even class conditional isn't too big of a problem). For testing, though, this seems like it could skew metrics. Further, the assumption that the label noise is not systemic seems very unlikely given that these data are recorded by human clinicians. The cases shown in Appendix C lend some credence to this assertion: for Case 1, 7/26 actual medications received probabilities < 0.5. My hunch is that clinical reviewers would view the paper with great skepticism. The authors will need to get creative about evaluation -- or invest a lot of time/money in labeling data -- to really prove that this works.\n\nFor what it is worth, I hope that this paper is accepted as I think it will be of great interest to the ICLR community. However, I am borderline about whether I'd be willing to fight for its acceptance. If the authors can address the reviewers' critiques -- and in particular, dive into the question of overfitting the imperfect labels and provide some insights -- I might be willing to raise my score and lobby for acceptance.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Predicting Medications from Diagnostic Codes with Recurrent Neural Networks
["Jacek M. Bajor", "Thomas A. Lasko"]
It is a surprising fact that electronic medical records are failing at one of their primary purposes, that of tracking the set of medications that the patient is actively taking. Studies estimate that up to 50% of such lists omit active drugs, and that up to 25% of all active medications do not appear on the appropriate patient list. Manual efforts to maintain these lists involve a great deal of tedious human labor, which could be reduced by computational tools to suggest likely missing or incorrect medications on a patient’s list. We report here an application of recurrent neural networks to predict the likely therapeutic classes of medications that a patient is taking, given a sequence of the last 100 billing codes in their record. Our best model was a GRU that achieved high prediction accuracy (micro-averaged AUC 0.93, Label Ranking Loss 0.076), limited by hardware constraints on model size. Additionally, examining individual cases revealed that many of the predictions marked incorrect were likely to be examples of either omitted medications or omitted billing codes, supporting our assertion of a substantial number of errors and omissions in the data, and the likelihood of models such as these to help correct them.
["Deep learning", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=rJEgeXFex
https://openreview.net/pdf?id=rJEgeXFex
https://openreview.net/forum?id=rJEgeXFex&noteId=rJEGyBz4g
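The reviews describe the model as an embedding layer over sparse billing codes feeding a recurrent network, with one sigmoid output per therapeutic class. A minimal sketch of that shape, with hypothetical vocabulary and dimensions (the paper's exact configuration is not reproduced here):

```python
import torch
import torch.nn as nn

class MedicationPredictor(nn.Module):
    """Embeds a sequence of billing codes, runs a GRU, and emits one
    logit per therapeutic class (multi-label, not multi-class)."""
    def __init__(self, n_codes=20000, n_classes=100, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_codes, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, codes):            # codes: (batch, seq_len) integer ids
        h, _ = self.gru(self.embed(codes))
        return self.out(h[:, -1, :])     # logits from the final hidden state

model = MedicationPredictor()
codes = torch.randint(0, 20000, (4, 100))   # 4 patients, last 100 codes each
logits = model(codes)
loss = nn.BCEWithLogitsLoss()(logits, torch.rand(4, 100).round())
print(logits.shape, loss.item())
```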
BytMf6WVe
rJ8uNptgl
ICLR.cc/2017/conference/-/paper148/official/review
{"title": "interesting experimental evaluation of variable bit-rate CNN weight compression scheme", "rating": "7: Good paper, accept", "review": "This paper proposes a novel neural network compression technique.\nThe goal is to compress maximally the network specification via parameter quantisation with a minimum impact on the expected loss.\nIt assumes pruning of the network parameters has already been performed, and only considers the quantisation of the individual scalar parameters of the network.\nIn contrast to previous work (Han et al. 2015a, Gong et al. 2014) the proposed approach takes into account the effect of the weight quantisation on the loss function that is used to train the network, and also takes into account the effect on a variable-length binary encoding of the cluster centers used for the quantisation. \n\nUnfortunately, the submitted paper is 20 pages, rather than the 8 recommended. The length of the paper seems unjustified to me, since the first three sections (first five pages) are very generic and redundant can be largely compressed or skipped (including figures 1 and 2). Although not a strict requirement by the submission guidelines, I would suggest the authors to compress their paper to 8 pages, this will improve the readability of the paper.\n\nTo take into account the impact on the network\u2019s loss the authors propose to use a second order approximation of the cost function of the loss. In the case of weights that originally constitute a local minimum of the loss, this leads to a formulation of the impact of the weight quantization on the loss in terms of a weighted k-means clustering objective, where the weights are derived from the hessian of the loss function at the original weights.\nThe hessian can be computed efficiently using a back-propagation algorithm similar to that used to compute the gradient, as shown in cited work from the literature. \nThe authors also propose to alternatively use a second-order moment term used by the Adam optimisation algorithm, since it can be loosely interpreted as an approximate Hessian. \n\nIn section 4.5 the authors argue that with their approach it is more natural to quantise weights across all layers together, due to the hessian weighting which takes into account the variable impact across layers of quantisation errors on the network performance. \nThe last statement in this section, however, was not clear to me: \n\u201cIn such deep neural networks, quantising network parameters of all layers together is more efficient since optimizing layer-by-layer clustering jointly across all layers requires exponential time complexity with respect to the number of layers.\u201d\nPerhaps the authors could elaborate a bit more on this point?\n\nIn section 5 the authors develop methods to take into account the code length of the weight quantisation in the clustering process. \nThe first method described by the authors (based on previous work), is uniform quantisation of the weight space, which is then further optimised by their hessian-weighted clustering procedure from section 4. \nFor the case of nonuniform codeword lengths to encode the cluster indices, the authors develop a modification of the Hessian weighted k-means algorithm in which the code length of each cluster is also taken into account, weighted by a factor lambda. 
Different values of lambda give rise to different compression-accuracy trade-offs, and the authors propose to cluster weights for a variety of lambda values and then pick the most accurate solution obtained, given a certain compression budget. \n\nIn section 6 the authors report a number of experimental results that were obtained with the proposed methods, and compare these results to those obtained by the layer-wise compression technique of Han et al 2015, and to the uncompressed models. \nFor these experiments the authors used three datasets, MNIST, CIFAR10 and ImageNet, with data-set specific architectures taken from the literature. \nThese results suggest a consistent and significant advantage of the proposed method over the work of Han et al. Comparison to the work of Gong et al 2014 is not made.\nThe results illustrate the advantage of the hessian weighted k-means clustering criterion, and the advantages of the variable bitrate cluster encoding. \n\nIn conclusion I would say that this is quite interesting work, although the technical novelty seems limited (but I\u2019m not a quantisation expert).\nInterestingly, the proposed techniques do not seem specific to deep conv nets, but rather generically applicable to quantisation of parameters of any model with an associated cost function for which a locally quadratic approximation can be formulated. It would be useful if the authors would discuss this point in their paper.\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Towards the Limit of Network Quantization
["Yoojin Choi", "Mostafa El-Khamy", "Jungwon Lee"]
Network quantization is a network compression technique that reduces the redundancy of deep neural networks. It reduces the number of distinct network parameter values by quantization in order to save the storage needed for them. In this paper, we design network quantization schemes that minimize the performance loss due to quantization given a compression ratio constraint. We analyze the quantitative relation of quantization errors to the neural network loss function and identify that the Hessian-weighted distortion measure is locally the right objective function for the optimization of network quantization. As a result, Hessian-weighted k-means clustering is proposed for clustering network parameters to quantize. When optimal variable-length binary codes, e.g., Huffman codes, are employed for further compression, we derive that the network quantization problem can be related to the entropy-constrained scalar quantization (ECSQ) problem in information theory and consequently propose two solutions of ECSQ for network quantization, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm. Finally, using simple uniform quantization followed by Huffman coding, we show from our experiments that compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet, 32-layer ResNet and AlexNet, respectively.
["Theory", "Deep learning"]
https://openreview.net/forum?id=rJ8uNptgl
https://openreview.net/pdf?id=rJ8uNptgl
https://openreview.net/forum?id=rJ8uNptgl&noteId=BytMf6WVe
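The review summarizes the paper's key idea: cluster scalar weights with a k-means objective in which each weight's squared quantization error is scaled by its (approximate) diagonal Hessian. A minimal 1-D sketch of that weighted Lloyd iteration, with random stand-ins for the weights and Hessian values (the paper's Hessian computation is not reproduced):

```python
import numpy as np

def hessian_weighted_kmeans(w, h, k, iters=50):
    """1-D k-means minimizing sum_i h_i * (w_i - c_{a(i)})^2,
    where w are weights and h > 0 are diagonal-Hessian values."""
    rng = np.random.default_rng(0)
    centers = rng.choice(w, size=k, replace=False)
    for _ in range(iters):
        # Assignment: nearest center. The Hessian weight is constant per
        # point, so it changes the update step but not the argmin here.
        assign = np.argmin((w[:, None] - centers[None, :]) ** 2, axis=1)
        # Update: Hessian-weighted mean of each cluster, so weights with
        # high curvature (large loss impact) pull their center harder.
        for j in range(k):
            m = assign == j
            if m.any():
                centers[j] = np.sum(h[m] * w[m]) / np.sum(h[m])
    return centers, assign

w = np.random.randn(10000)
h = np.abs(np.random.randn(10000)) + 1e-3  # stand-in for diagonal Hessian
centers, assign = hessian_weighted_kmeans(w, h, k=8)
print(np.sort(centers))
```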
ryJNu_b4x
rJ8uNptgl
ICLR.cc/2017/conference/-/paper148/official/review
{"title": "Effective quantization", "rating": "7: Good paper, accept", "review": "The paper has two main contributions:\n\n1) Shows that uniform quantization works well with variable length (Huffman) coding\n\n2) Improves fixed-length quantization by proposing the Hessian-weighted k-means, as opposed to standardly used vanilla k-means. The Hessian weighting is well motivated, and it is also explained how to use an efficient approximation \"for free\" when using the Adam optimizer, which is quite neat. As opposed to vanilla k-means, one of the main benefits of this approach (apart from improved performance) is that no tuning on per-layer compression rates is required, as this is achieved for free.\n\nTo conclude, I like the paper: (1) is not really novel but it doesn't seem other papers have done this before so it's nice to know it works well, and (2) is quite neat and also works well. The paper is easy to follow, results are good. My only complaint is that it's a bit too long.\n\nMinor note - I still don't understand the parts about storing \"additional bits for each binary codeword for layer indication\" when doing layer-by-layer quantization. What's the problem of just having an array of quantized weight values for each layer, i.e. q[0][:] would store all quantized weights for layer 0, q[1][:] for layer 1 etc, and for each layer you would have the codebook. So the only overhead over joint quantization is storing the codebook for each layer, which is insignificant. I don't understand the \"additional bit\" part. But anyway, this is really not a important as I don't think it affects the paper at all, just authors might want to additionally clarify this point (maybe I'm missing something obvious, but if I am then it's likely some other people will as well).\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Towards the Limit of Network Quantization
["Yoojin Choi", "Mostafa El-Khamy", "Jungwon Lee"]
Network quantization is a network compression technique that reduces the redundancy of deep neural networks. It reduces the number of distinct network parameter values by quantization in order to save the storage needed for them. In this paper, we design network quantization schemes that minimize the performance loss due to quantization given a compression ratio constraint. We analyze the quantitative relation of quantization errors to the neural network loss function and identify that the Hessian-weighted distortion measure is locally the right objective function for the optimization of network quantization. As a result, Hessian-weighted k-means clustering is proposed for clustering network parameters to quantize. When optimal variable-length binary codes, e.g., Huffman codes, are employed for further compression, we derive that the network quantization problem can be related to the entropy-constrained scalar quantization (ECSQ) problem in information theory and consequently propose two solutions of ECSQ for network quantization, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm. Finally, using simple uniform quantization followed by Huffman coding, we show from our experiments that compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet, 32-layer ResNet and AlexNet, respectively.
["Theory", "Deep learning"]
https://openreview.net/forum?id=rJ8uNptgl
https://openreview.net/pdf?id=rJ8uNptgl
https://openreview.net/forum?id=rJ8uNptgl&noteId=ryJNu_b4x
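The other result the reviews highlight is that plain uniform quantization followed by variable-length (Huffman) coding is already strong. A minimal sketch of the uniform quantizer plus an entropy estimate of the achievable code length (the Huffman table itself is omitted; all values are synthetic):

```python
import numpy as np

def uniform_quantize(w, n_levels):
    """Snap weights onto n_levels evenly spaced values; return the
    per-weight level indices and the codebook of level centers."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (n_levels - 1)
    idx = np.round((w - lo) / step).astype(np.int64)
    centers = lo + step * np.arange(n_levels)
    return idx, centers

w = np.random.randn(100000)
idx, centers = uniform_quantize(w, n_levels=32)

# Huffman coding's average code length is closely lower-bounded by the
# entropy of the index distribution; since Gaussian-ish weights crowd
# the central levels, those indices get short codes.
p = np.bincount(idx, minlength=32) / idx.size
entropy_bits = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print(f"~{entropy_bits:.2f} bits/weight vs. 5 bits fixed-length")
```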
Syu1KV9rg
rJ8uNptgl
ICLR.cc/2017/conference/-/paper148/official/review
{"title": "Review for \"Towards the Limit of Network Quantization\"", "rating": "7: Good paper, accept", "review": "This paper proposes a network quantization method for compressing the parameters of neural networks, therefore, compressing the amount of storage needed for the parameters. The authors assume that the network is already pruned and aim for compressing the non-pruned parameters. The problem of network compression is a well-motivated problem and of interest to the ICLR community. \n\nThe main drawback of the paper is its novelty. The paper is heavily built on the results of Han 2015 and only marginally extends Han 2015 to overcome its drawbacks. It should be noted that the proposed method in this paper has not been proposed before. \n\nThe paper is well-structured and easy to follow. Although it heavily builds on Han 2015, it is still much longer than Han 2015. I believe that there is still some redundancy in the paper. The experiments section starts on Page 12 whereas for Han 2015 the experiments start on page 5. Therefore, I believe much of the introductory text is redundant and can be efficiently cut. \n\nExperimental results in the paper show good compression performance compared to Han 2015 while losing very little accuracy. Can the authors mention why there is no comparison with Hang 2015 on ResNet in Table 1?\n\nSome comments:\n1) It is not clear whether the procedure depicted in figure 1 is the authors\u2019 contribution or has been in the literature.\n2) In section 4.1 the authors approximate the hessian matrix with a diagonal matrix. Can the authors please explain how this approximation affects the final compression? Also how much does one lose by making such an approximation?\n\nminor typos (These are for the revised version of the paper):\n1) Page 2, Parag 3, 3rd line from the end: fined-tuned -> fine-tuned\n2) Page 2, one para to the end, last line: assigned for -> assigned to\n3) Page 5, line 2, same as above\n4) Page 8, Section 5, Line 3: explore -> explored", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Towards the Limit of Network Quantization
["Yoojin Choi", "Mostafa El-Khamy", "Jungwon Lee"]
Network quantization is a network compression technique that reduces the redundancy of deep neural networks. It reduces the number of distinct network parameter values by quantization in order to save the storage needed for them. In this paper, we design network quantization schemes that minimize the performance loss due to quantization given a compression ratio constraint. We analyze the quantitative relation of quantization errors to the neural network loss function and identify that the Hessian-weighted distortion measure is locally the right objective function for the optimization of network quantization. As a result, Hessian-weighted k-means clustering is proposed for clustering network parameters to quantize. When optimal variable-length binary codes, e.g., Huffman codes, are employed for further compression, we derive that the network quantization problem can be related to the entropy-constrained scalar quantization (ECSQ) problem in information theory and consequently propose two solutions of ECSQ for network quantization, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm. Finally, using simple uniform quantization followed by Huffman coding, we show from our experiments that compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet, 32-layer ResNet and AlexNet, respectively.
["Theory", "Deep learning"]
https://openreview.net/forum?id=rJ8uNptgl
https://openreview.net/pdf?id=rJ8uNptgl
https://openreview.net/forum?id=rJ8uNptgl&noteId=Syu1KV9rg
rkqq9Mime
BJh6Ztuxl
ICLR.cc/2017/conference/-/paper60/official/review
{"title": "Interesting analytic results on unsupervised sentence encoders", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper presents a set of experiments investigating what kinds of information are captured in common unsupervised approaches to sentence representation learning. The results are non-trivial and somewhat surprising. For example, they show that it is possible to reconstruct word order from bag of words representations, and they show that LSTM sentence autoencoders encode interpretable features even for randomly permuted nonsense sentences.\n\nEffective unsupervised sentence representation learning is an important and largely unsolved problem in NLP, and this kind of work seems like it should be straightforwardly helpful towards that end. In addition, the experimental paradigm presented here is likely more broadly applicable to a range of representation learning systems. Some of the results seem somewhat strange, but I see no major technical concerns, and think that that they are informative. I recommend acceptance.\n\nOne minor red flag: \n- The massive drop in CBOW performance in Figures 1b and 4b are not explained, and seem implausible enough to warrant serious further investigation. Can you be absolutely certain that those results would appear with a different codebase and different random seed implementing the same model? Fortunately, this point is largely orthogonal to the major results of the paper.\n\nTwo writing comments:\n- I agree that the results with word order and CBOW are surprising, but I think it's slightly misleading to say that CBOW is predictive of word order. It doesn't represent word order at all, but it's possible to probabilistically reconstruct word order from the information that it does encode.\n- Saying that \"LSTM auto-encoders are more effective at encoding word order than word content\" doesn't really make sense. These two quantities aren't comparable. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks
["Yossi Adi", "Einat Kermany", "Yonatan Belinkov", "Ofer Lavi", "Yoav Goldberg"]
There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector’s dimensionality on the resulting representations.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=BJh6Ztuxl
https://openreview.net/pdf?id=BJh6Ztuxl
https://openreview.net/forum?id=BJh6Ztuxl&noteId=rkqq9Mime
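The methodology the reviews describe is easy to outline: freeze a sentence embedding, then train a small classifier to predict a surface property such as sentence length. A minimal sketch with averaged random word vectors standing in for a CBOW encoder (all data is synthetic; the paper's datasets and classifiers are not reproduced):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
dim, vocab = 100, 5000
word_vecs = rng.normal(size=(vocab, dim))

# Synthetic "sentences": random word ids, embedded by averaging word
# vectors (a CBOW-style encoder).
def embed(length):
    ids = rng.integers(0, vocab, size=length)
    return word_vecs[ids].mean(axis=0)

lengths = rng.integers(5, 31, size=2000)
X = np.stack([embed(n) for n in lengths])
y = (lengths > 17).astype(int)   # binary long-vs-short probe task

# The probe: if a classifier recovers length from the embedding alone,
# the embedding encodes length. For averaged vectors this works via
# concentration: longer averages have visibly smaller norms.
probe = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
probe.fit(X[:1500], y[:1500])
print("probe accuracy:", probe.score(X[1500:], y[1500:]))
```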
HkHqRoIEe
BJh6Ztuxl
ICLR.cc/2017/conference/-/paper60/official/review
{"title": "Review", "rating": "8: Top 50% of accepted papers, clear accept", "review": "\nThe authors present a methodology for analyzing sentence embedding techniques by checking how much the embeddings preserve information about sentence length, word content, and word order. They examine several popular embedding methods including autoencoding LSTMs, averaged word vectors, and skip-thought vectors. The experiments are thorough and provide interesting insights into the representational power of common sentence embedding strategies, such as the fact that word ordering is surprisingly low-entropy conditioned on word content.\n\nExploring what sort of information is encoded in representation learning methods for NLP is an important and under-researched area. For example, the tide of word-embeddings research was mostly stemmed after a thread of careful experimental results showing most embeddings to be essentially equivalent, culminating in \"Improving Distributional Similarity with Lessons Learned from Word Embeddings\" by Levy, Goldberg, and Dagan. As representation learning becomes even more important in NLP this sort of research will be even more important.\n\nWhile this paper makes a valuable contribution in setting out and exploring a methodology for evaluating sentence embeddings, the evaluations themselves are quite simple and do not necessarily correlate with real-world desiderata for sentence embeddings (as the authors note in other comments, performance on these tasks is not a normative measure of embedding quality). For example, as the authors note, the ability of the averaged vector to encode sentence length is trivially to be expected given the central limit theorem (or more accurately, concentration inequalities like Hoeffding's inequality).\n\nThe word-order experiments were interesting. A relevant citation for this sort of conditional ordering procedure is \"Generating Text with Recurrent Neural Networks\" by Sutskever, Martens, and Hinton, who refer to the conversion of a bag of words into a sentence as \"debagging.\"\n\nAlthough this is just a first step in better understanding of sentence embeddings, it is an important one and I recommend this paper for publication.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks
["Yossi Adi", "Einat Kermany", "Yonatan Belinkov", "Ofer Lavi", "Yoav Goldberg"]
There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector’s dimensionality on the resulting representations.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=BJh6Ztuxl
https://openreview.net/pdf?id=BJh6Ztuxl
https://openreview.net/forum?id=BJh6Ztuxl&noteId=HkHqRoIEe
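The review's point that word order is "surprisingly low-entropy conditioned on word content" is concrete when seen as the "debagging" task it cites: recover an ordering of a bag of words under a language model. A toy sketch with a hypothetical bigram score table (exhaustive search, so only feasible for tiny bags):

```python
import itertools

# Hypothetical bigram scores: higher means "a then b" is more likely.
bigram = {("the", "cat"): 2.0, ("cat", "sat"): 1.5, ("sat", "down"): 1.2,
          ("the", "sat"): 0.1, ("cat", "down"): 0.2, ("down", "the"): 0.1}

def score(seq):
    # Sum of bigram scores over adjacent pairs; unseen pairs score 0.
    return sum(bigram.get(p, 0.0) for p in zip(seq, seq[1:]))

def debag(bag):
    """Pick the permutation of the bag that the bigram model scores
    highest: word content alone heavily constrains the order."""
    return max(itertools.permutations(bag), key=score)

print(debag(["cat", "the", "down", "sat"]))  # ('the', 'cat', 'sat', 'down')
```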
H1rEX6WNl
BJh6Ztuxl
ICLR.cc/2017/conference/-/paper60/official/review
{"title": "Experimental analysis of unsupervised sentence embeddings", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper analyzes various unsupervised sentence embedding approaches by means of a set of auxiliary prediction tasks. By examining how well classifiers can predict word order, word content, and sentence length, the authors aim to assess how much and what type of information is captured by the different embedding models. The main focus is on a comparison between and encoder-decoder model (ED) and a permutation-invariant model, CBOW. (There is also an analysis of skip-thought vectors, but since it was trained on a different corpus it is hard to compare).\n\nThere are several interesting and perhaps counter-intuitive results that emerge from this analysis and the authors do a nice job of examining those results and, for the most part, explaining them. However, I found the discussion of the word-order experiment rather unsatisfying. It seems to me that the appropriate question should have been something like, 'How well does model X do compared to the theoretical upper bound which can be deduced from natural language statistics?' This is investigated from one angle in Section 7, but I would have preferred to the effect of natural language statistics discussed up front rather than presented as the explanation to a 'surprising' observation. I had a similar reaction to the word-order experiments.\n\nMost of the interesting results, in my opinion, are about the ED model. It is fascinating that the LSTM encoder does not seem to rely on natural-language ordering statistics -- it seems like doing so should be a big win in terms of per-parameter expressivity. I also think that it's strange that word content accuracy begins to drop for high-dimensional embeddings. I suppose this could be investigated by handicapping the decoder.\n\nOverall, this is a very nice paper investigating some aspects of the information content stored in various types of sentence embeddings. I recommend acceptance.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks
["Yossi Adi", "Einat Kermany", "Yonatan Belinkov", "Ofer Lavi", "Yoav Goldberg"]
There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector’s dimensionality on the resulting representations.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=BJh6Ztuxl
https://openreview.net/pdf?id=BJh6Ztuxl
https://openreview.net/forum?id=BJh6Ztuxl&noteId=H1rEX6WNl
Hys6iL27x
HyFkG45gl
ICLR.cc/2017/conference/-/paper209/official/review
{"title": "A nice approach to this problem, but inputs seem too artificial", "rating": "5: Marginally below acceptance threshold", "review": "The paper uses neural networks to answer falling body physics questions by 1. Resolving the parameters of the problem, and 2. Figure out which quantity is in question, compute it using a numerical integrator and return it as an answer.\nLearning and inference are performed on artificially generated questions using a probabilistic grammar.\nOverall, the paper is clearly written and seems to be novel in its approach.\n\nThe main problems I see with this work are:\n1. The task is artificial, and it's not clear how hard it is. The authors provide no baseline nor do they compare it to any real world problem. Without some measure of difficulty it's hard to tell if a much simple approach will do better, or if the task even makes sense.\n2. The labler LSTM uses only 10 hidden units. This is remarkably small for language modeling problems, and makes one further wonder about the difficulty of the task. The authors provide no reasoning for this choice.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Machine Solver for Physics Word Problems
["Megan Leszczynski", "Jose Moreira"]
We build a machine solver for word problems on the physics of a free falling object under constant acceleration of gravity. Each problem consists of a formulation part, describing the setting, and a question part asking for the value of an unknown. Our solver consists of two long short-term memory recurrent neural networks and a numerical integrator. The first neural network (the labeler) labels each word of the problem, identifying the physical parameters and the question part of the problem. The second neural network (the classifier) identifies what is being asked in the question. Using the information extracted by both networks, the numerical integrator computes the solution. We observe that the classifier is resilient to errors made by the labeler, which does a better job of identifying the physics parameters than the question. Training, validation and test sets of problems are generated from a grammar, with validation and test problems structurally different from the training problems. The overall accuracy of the solver on the test cases is 99.8%.
["problem", "machine solver", "question part", "solver", "numerical integrator", "labeler", "classifier", "question", "validation"]
https://openreview.net/forum?id=HyFkG45gl
https://openreview.net/pdf?id=HyFkG45gl
https://openreview.net/forum?id=HyFkG45gl&noteId=Hys6iL27x
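The solver's final stage, as described in this record, is ordinary numerical integration once the two LSTMs have extracted the physical parameters and the question type. A sketch of that stage with the networks' outputs hard-coded (function and field names are hypothetical, not the paper's interface):

```python
def integrate_free_fall(y0, v0, g=-9.81, dt=1e-4):
    """Euler-integrate y'' = g until the object hits the ground (y <= 0)."""
    t, y, v = 0.0, y0, v0
    max_height = y0
    while y > 0.0:
        v += g * dt
        y += v * dt
        t += dt
        max_height = max(max_height, y)
    return {"time_of_flight": t, "max_height": max_height, "final_speed": abs(v)}

# Pretend the labeler found these parameters in the problem text and the
# classifier decided the question asks for the time of flight.
params = {"y0": 20.0, "v0": 5.0}
question = "time_of_flight"
print("%s = %.2f" % (question, integrate_free_fall(**params)[question]))
```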
Sk0Aqs-Vg
HyFkG45gl
ICLR.cc/2017/conference/-/paper209/official/review
{"title": "An interesting paper to read but could be made better", "rating": "4: Ok but not good enough - rejection", "review": "This paper build a language-based solver for simple physics problems (a free falling object under constant velocity). Given a natural language query sampled from a fixed grammar, the system uses two LSTM models to extract key components, e.g., physical parameters and the type of questions being asked, which are then sent to a numerical integrator for the answer. The overall performance in the test set is almost perfect (99.8%).\n\nOverall I found this paper quite interesting to read (and it is well written). However, it is not clear how hard the problem is and how much this approach could generalize over more realistic (and complicated) situations. The dataset are a bit small and might not cover the query space. It might be better to ask AMT workers to come up with more complicated queries/answers. The physics itself is also quite easy. What happens if we apply the same idea on billiards? In this case, even we have a perfect physics simulator, the question to be asked could be very deep and requires multi-hop reasoning.\n\nFinally, given the same problem setting (physics solver), in my opinion, a more interesting direction is to study how DNN can take the place of numerical integrator and gives rough answers to the question (i.e., intuitive physics). It is a bit disappointing to see that DNN is only used to extract the parameters while still a traditional approach is used for core reasoning part. It would be more interesting to see the other way round.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Machine Solver for Physics Word Problems
["Megan Leszczynski", "Jose Moreira"]
We build a machine solver for word problems on the physics of a free falling object under constant acceleration of gravity. Each problem consists of a formulation part, describing the setting, and a question part asking for the value of an unknown. Our solver consists of two long short-term memory recurrent neural networks and a numerical integrator. The first neural network (the labeler) labels each word of the problem, identifying the physical parameters and the question part of the problem. The second neural network (the classifier) identifies what is being asked in the question. Using the information extracted by both networks, the numerical integrator computes the solution. We observe that the classifier is resilient to errors made by the labeler, which does a better job of identifying the physics parameters than the question. Training, validation and test sets of problems are generated from a grammar, with validation and test problems structurally different from the training problems. The overall accuracy of the solver on the test cases is 99.8%.
["problem", "machine solver", "question part", "solver", "numerical integrator", "labeler", "classifier", "question", "validation"]
https://openreview.net/forum?id=HyFkG45gl
https://openreview.net/pdf?id=HyFkG45gl
https://openreview.net/forum?id=HyFkG45gl&noteId=Sk0Aqs-Vg
BJDe0lzNl
HyFkG45gl
ICLR.cc/2017/conference/-/paper209/official/review
{"title": "Rich data generation procedure but system specific and not well motivated", "rating": "4: Ok but not good enough - rejection", "review": "The authors describe a system for solving physics word problems. The system consists of two neural networks: a labeler and a classifier, followed by a numerical integrator. On the dataset that the authors synthesize, the full system attains near full performance. Outside of the pipeline, the authors also provide some network activation visualizations.\n\nThe paper is clear, and the data generation procedure/grammar is rich and interesting. However, overall the system is not well motivated. Why did they consider this particular problem domain, and what challenges did they specifically hope to address? Is it the ability to label sequences using LSTM networks, or the ability to classify what is being asked for in the question? This has already been illustrated, for example, by work on POS tagging and by memory networks for the babi tasks. A couple of standard architectural modifications, i.e. bi-directionality and a content-based attention mechanism, were also not considered.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Machine Solver for Physics Word Problems
["Megan Leszczynski", "Jose Moreira"]
We build a machine solver for word problems on the physics of a free falling object under constant acceleration of gravity. Each problem consists of a formulation part, describing the setting, and a question part asking for the value of an unknown. Our solver consists of two long short-term memory recurrent neural networks and a numerical integrator. The first neural network (the labeler) labels each word of the problem, identifying the physical parameters and the question part of the problem. The second neural network (the classifier) identifies what is being asked in the question. Using the information extracted by both networks, the numerical integrator computes the solution. We observe that the classifier is resilient to errors made by the labeler, which does a better job of identifying the physics parameters than the question. Training, validation and test sets of problems are generated from a grammar, with validation and test problems structurally different from the training problems. The overall accuracy of the solver on the test cases is 99.8%.
["problem", "machine solver", "question part", "solver", "numerical integrator", "labeler", "classifier", "question", "validation"]
https://openreview.net/forum?id=HyFkG45gl
https://openreview.net/pdf?id=HyFkG45gl
https://openreview.net/forum?id=HyFkG45gl&noteId=BJDe0lzNl
H1th_uZNg
HJTzHtqee
ICLR.cc/2017/conference/-/paper481/official/review
{"title": "A solid empirical study", "rating": "7: Good paper, accept", "review": "This paper proposes a compare-aggregate framework that performs word-level matching followed by aggregation with convolutional neural networks. It compares six different comparison functions and evaluates them on four datasets. Extensive experimental results have been reported and compared against various published baselines.\n\nThe paper is well written overall.\n\nA few detailed comments:\n* page 4, line5: including a some -> including some\n* What's the benefit of the preprocessing and attention step? Can you provide the results without it?\n* Figure 2 is hard to read, esp. when on printed hard copy. Please enhance the quality.\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
A Compare-Aggregate Model for Matching Text Sequences
["Shuohang Wang", "Jing Jiang"]
Many NLP tasks including machine comprehension, answer selection and text entailment require the comparison between sequences. Matching the important units between sequences is key to solving these problems. In this paper, we present a general "compare-aggregate" framework that performs word-level matching followed by aggregation using Convolutional Neural Networks. We particularly focus on the different comparison functions we can use to match two vectors. We use four different datasets to evaluate the model. We find that some simple comparison functions based on element-wise operations can work better than a standard neural network and a neural tensor network.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=HJTzHtqee
https://openreview.net/pdf?id=HJTzHtqee
https://openreview.net/forum?id=HJTzHtqee&noteId=H1th_uZNg
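The comparison functions these reviews debate all map a pair of aligned vectors to a vector that the CNN later aggregates. A toy numpy sketch of the element-wise variants next to the small-network variant (random placeholder weights; names are descriptive, not the paper's exact identifiers):

```python
import numpy as np

rng = np.random.default_rng(0)
l = 8                                          # hidden dimension
h, a = rng.normal(size=l), rng.normal(size=l)  # an aligned vector pair

# Element-wise comparison functions:
sub = (h - a) ** 2       # SUB: squared element-wise difference
mult = h * a             # MULT: element-wise product

# SubMult+NN: feed [sub; mult] through a one-layer ReLU net.
W, b = rng.normal(size=(l, 2 * l)), rng.normal(size=l)
submult_nn = np.maximum(0.0, W @ np.concatenate([sub, mult]) + b)

print(sub.shape, mult.shape, submult_nn.shape)  # each a length-l comparison vector
```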
ryR1LZoQe
HJTzHtqee
ICLR.cc/2017/conference/-/paper481/official/review
{"title": "Effective model design, great evaluation", "rating": "8: Top 50% of accepted papers, clear accept", "review": "The paper presents a general approach to modeling for natural language understanding problems with two distinct textual inputs (such as a question and a source text) that can be aligned in some way. In the approach, soft attention is first used to derive alignments between the tokens of the two texts, then a comparison function uses the resulting alignments (represented as pairs of attention queries and attention results) to derive a representations that are aggregated by CNN into a single vector from which an output can be computed. The paper both presents this as an overall modeling strategy that can be made to work quite well, and offers a detailed empirical analysis of the comparison component of the model.\n\nThis work is timely. Language understanding problems of this kind are a major open issue in NLP, and are just at the threshold of being addressable with representation learning methods. The work presents a general approach which is straightforward and reasonable, and shows that it can yield good results. The work borders on incremental (relative to their earlier work or that of Parikh et al.), but it contributes in enough substantial ways that I'd strongly recommend acceptance.\n\nDetail: \n- The model, at least as implemented for the problems with longer sequences (everything but SNLI), is not sensitive to word order. It is empirically competitive, but this insensitivity places a strong upper bound on its performance. The paper does make this clear, but it seems salient enough to warrant a brief mention in the introduction or discussion sections.\n- If I understand correctly, your attention strategy is based more closely on the general/bilinear strategy of Luong et al. '15 than it is on the earlier Bahdanau work. You should probably cite the former (or some other more directly relevant reference for that strategy).\n- Since the NTN risks overfitting because of its large number of parameters, did you try using a version with input dimension l and a smaller output dimension m (so an l*l*m tensor)?\n- You should probably note that SubMultNN looks a lot like the strategy for *sentence*-level matching in the Lili Mou paper you cite.\n- Is there a reason you use the same parameters for preprocessing the question and answer in (1)? These could require different things to be weighted highly.\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
A Compare-Aggregate Model for Matching Text Sequences
["Shuohang Wang", "Jing Jiang"]
Many NLP tasks including machine comprehension, answer selection and text entailment require the comparison between sequences. Matching the important units between sequences is key to solving these problems. In this paper, we present a general "compare-aggregate" framework that performs word-level matching followed by aggregation using Convolutional Neural Networks. We particularly focus on the different comparison functions we can use to match two vectors. We use four different datasets to evaluate the model. We find that some simple comparison functions based on element-wise operations can work better than a standard neural network and a neural tensor network.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=HJTzHtqee
https://openreview.net/pdf?id=HJTzHtqee
https://openreview.net/forum?id=HJTzHtqee&noteId=ryR1LZoQe
H1rX01G4e
HJTzHtqee
ICLR.cc/2017/conference/-/paper481/official/review
{"title": "Official Review", "rating": "6: Marginally above acceptance threshold", "review": "This paper proposed a compare-aggregate model for the NLP tasks that require semantically comparing the text sequences, such as question answering and textual entailment.\nThe basic framework of this model is to apply a convolutional neural network (aggregation) after a element-wise operation (comparison) over the attentive outputs of the LSTMs. \nThe highlighted part is the comparison, where this paper compares several different methods for matching text sequences, and the element-wise subtraction/multiplication operations are demonstrated to achieve generally better performance on four different datasets.\nWhile the weak point is that this is an incremental work and a bit lack of innovation. A qualitative evaluation about how subtraction, multiplication and other comparison functions perform on varied kinds of sentences would be more interesting. \n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
A Compare-Aggregate Model for Matching Text Sequences
["Shuohang Wang", "Jing Jiang"]
Many NLP tasks including machine comprehension, answer selection and text entailment require the comparison between sequences. Matching the important units between sequences is key to solving these problems. In this paper, we present a general "compare-aggregate" framework that performs word-level matching followed by aggregation using Convolutional Neural Networks. We particularly focus on the different comparison functions we can use to match two vectors. We use four different datasets to evaluate the model. We find that some simple comparison functions based on element-wise operations can work better than a standard neural network and a neural tensor network.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=HJTzHtqee
https://openreview.net/pdf?id=HJTzHtqee
https://openreview.net/forum?id=HJTzHtqee&noteId=H1rX01G4e
SyF1qboVg
ry18Ww5ee
ICLR.cc/2017/conference/-/paper359/official/review
{"title": "interesting extension to successive halving, still looking forward to the parallel asynchronous version", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This was an interesting paper. The algorithm seems clear, the problem well-recognized, and the results are both strong and plausible.\n\nApproaches to hyperparameter optimization based on SMBO have struggled to make good use of convergence during training, and this paper presents a fresh look at a non-SMBO alternative (at least I thought it did, until one of the other reviewers pointed out how much overlap there is with the previously published successive halving algorithm - too bad!). Still, I'm excited to try it. I'm cautiously optimistic that this simple alternative to SMBO may be the first advance to model search for the skeptical practitioner since the case for random search > grid search (http://www.jmlr.org/papers/v13/bergstra12a.html, which this paper should probably cite in connection with their random search baseline.)\n\nI would suggest that the authors remove the (incorrect?) claim that this algorithm is \"embarrassingly parallel\" as it seems that there are number of synchronization barriers at which state must be shared in order to make the go-no-go decisions on whatever training runs are still in progress. As the authors themselves point out as future work, there are interesting questions around how to adapt this algorithm to make optimal use of a cluster (I'm optimistic that it should carry over, but it's not trivial).\n\nFor future work, the authors might be interested in Hutter et al's work on Bayesian Optimization With Censored Response Data (https://arxiv.org/abs/1310.1947) for some ideas about how to use the dis-continued runs.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Hyperband: Bandit-Based Configuration Evaluation for Hyperparameter Optimization
["Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar"]
Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian Optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation. We present Hyperband, a novel algorithm for hyperparameter optimization that is simple, flexible, and theoretically sound. Hyperband is a principled early-stopping method that adaptively allocates a predefined resource, e.g., iterations, data samples or number of features, to randomly sampled configurations. We compare Hyperband with state-of-the-art Bayesian Optimization methods on several hyperparameter optimization problems. We observe that Hyperband can provide over an order of magnitude speedup over competitors on a variety of neural network and kernel-based learning problems.
["hyperband", "configuration evaluation", "hyperparameter optimization hyperband", "hyperparameter optimization performance", "machine", "algorithms depends", "good set", "hyperparameters", "recent approaches", "bayesian optimization"]
https://openreview.net/forum?id=ry18Ww5ee
https://openreview.net/pdf?id=ry18Ww5ee
https://openreview.net/forum?id=ry18Ww5ee&noteId=SyF1qboVg
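For concreteness, a compact sketch of the bracket structure the reviewers discuss: an outer loop over brackets s (most to least exploratory), each running successive halving inside. It follows the paper's published pseudocode in spirit, but `get_config` and `train_and_eval` are hypothetical callbacks, not the authors' code:

```python
import math, random

def hyperband(get_config, train_and_eval, max_resource=81, eta=3):
    # get_config() samples a random configuration; train_and_eval(cfg, r)
    # returns a validation loss after training cfg with r units of resource.
    s_max = int(round(math.log(max_resource) / math.log(eta)))
    B = (s_max + 1) * max_resource
    best = (float("inf"), None)
    for s in range(s_max, -1, -1):            # brackets: exploratory -> conservative
        n = int(math.ceil(B / max_resource * eta**s / (s + 1)))  # initial configs
        r = max_resource * eta**-s                               # initial resource
        configs = [get_config() for _ in range(n)]
        for i in range(s + 1):                # successive halving within the bracket
            r_i = r * eta**i
            ranked = sorted((train_and_eval(c, r_i), c) for c in configs)
            best = min(best, ranked[0])
            configs = [c for _, c in ranked[: max(1, len(ranked) // eta)]]
    return best

# Toy usage: loss depends on a scalar hyperparameter and shrinks with resource.
loss, cfg = hyperband(lambda: random.random(),
                      lambda c, r: (c - 0.3) ** 2 + 1.0 / r)
print("best config %.3f, loss %.4f" % (cfg, loss))
```

Setting s = s_max recovers the most aggressive successive-halving bracket (the b=4 setting debated below), while s = 0 degenerates to plain random search at full resource.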
HyOACaZEx
ry18Ww5ee
ICLR.cc/2017/conference/-/paper359/official/review
{"title": "Good extension of successive halving and random search", "rating": "7: Good paper, accept", "review": "This paper presents Hyperband, a method for hyperparameter optimization where the model is trained by gradient descent or some other iterative scheme. The paper builds on the successive halving + random search approach of Jamieson and Talwalkar and addresses the tradeoff between training fewer models for a longer amount of time, or many models for a shorter amount of time. Effectively, the idea is to perform multiple rounds of successive halving, starting from the most exploratory setting, and then in each round exponentially decreasing the number of experiments, but granting them exponentially more resources. In contrast to other recent papers on this topic, the approach here does not rely on any specific model of the underlying learning curves and therefore makes fewer assumptions about the nature of the model. The results seem to show that this approach can be highly effective, often providing several factors of speedup over sequential approaches.\n\nOverall I think this paper is a good contribution to the hyperparameter optimization literature. It\u2019s relatively simple to implement, and seems to be quite effective for many problems. It seems like a natural extension of the random search methodology to the case of early stopping. To me, it seems like Hyperband would be most useful on problems where a) random search itself is expected to perform well and b) the computational budget is sufficiently constrained so that squeezing out the absolute best performance is not feasible and near-optimal performance is sufficient. I would personally like to see the plots in Figure 3 run out far enough that the other methods have had time to converge in order to see what this gap between optimal and near-optimal really is (if there is one).\n\nI\u2019m not sure I agree with the use of random2x as a baseline. I can see why it\u2019s a useful comparison because it demonstrates the benefit of parallelism over sequential methods, but virtually all of these other methods also have parallel extensions. I think if random2x is shown, then I would also like to see SMAC2x, Spearmint2x, TPE2x, etc. I also think it would be worth seeing 3x, 10x, and so forth and how Hyperband fares against these baselines.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Hyperband: Bandit-Based Configuration Evaluation for Hyperparameter Optimization
["Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar"]
Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian Optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation. We present Hyperband, a novel algorithm for hyperparameter optimization that is simple, flexible, and theoretically sound. Hyperband is a principled early-stopping method that adaptively allocates a predefined resource, e.g., iterations, data samples or number of features, to randomly sampled configurations. We compare Hyperband with state-of-the-art Bayesian Optimization methods on several hyperparameter optimization problems. We observe that Hyperband can provide over an order of magnitude speedup over competitors on a variety of neural network and kernel-based learning problems.
["hyperband", "configuration evaluation", "hyperparameter optimization hyperband", "hyperparameter optimization performance", "machine", "algorithms depends", "good set", "hyperparameters", "recent approaches", "bayesian optimization"]
https://openreview.net/forum?id=ry18Ww5ee
https://openreview.net/pdf?id=ry18Ww5ee
https://openreview.net/forum?id=ry18Ww5ee&noteId=HyOACaZEx
ryVZxPfVe
ry18Ww5ee
ICLR.cc/2017/conference/-/paper359/official/review
{"title": "A nice paper, just needs to relate to the existing literature better", "rating": "7: Good paper, accept", "review": "This paper discusses Hyperband, an extension of successive halving by Jamieson & Talwalkar (AISTATS 2016). Successive halving is a very nice algorithm that starts evaluating many configurations and repeatedly cuts off the current worst half to explore many configuration for a limited budget.\n\nHaving read the paper for the question period and just rereading it again, I am now not entirely sure what its contribution is meant to be: the only improvement of Hyperband vs. successive halving is in the theoretical worst case bounds (not more than 5x worse than random search), but you can (a) trivially obtain that bound by using a fifth of your time for running random configurations to completion and (b) the theoretical analysis to show this is said to be beyond the scope of the paper. That makes me wonder whether the theoretical results are the contribution of this paper, or whether they are the subject of a different paper and the current paper is mostly an empirical study of the method?\nI hope to get a response by the authors and see this made clearer in an updated version of the paper.\n\nIn terms of experiments, the paper fails to show a case where Hyperband actually performs better than the authors' previous algorithm successive halving with its most agressive setting of bracket b=4. Literally, in every figure, bracket b=4 is at least as good (and sometimes substantially better) than Hyperband. That makes me think that in practice I would prefer successive halving with b=4 over Hyperband. (And if I really want Hyperband's guarantee of not being more than 5x worse than random search I can run random search on a fifth of my machines.) \nThe experiments also compare to some Bayesian optimization methods, but not to the most relevant very closely related Multi-Task Bayesian Optimization methods that have been dominating effective methods for deep learning in that area in the last 3 years: \"Multi-Task Bayesian Optimization\" by Swersky, Snoek, and Adams (2013) already showed 5x speedups for deep learning by starting with smaller datasets, and there have been several follow-up papers showing even larger speedups. \n\nGiven that this prominent work on multitask Bayesian optimization exists, I also think the introduction, which sells Hyperband as a very new approach to hyperparameter optimization is misleading. I would've much preferred a more down-to-earth pitch that says \"configuration evaluation\" has been becoming a very important feature in hyperparameter optimization, including Bayesian optimization, that sometimes yields very large speedups (this can be quantified by examples from existing papers) and this paper adds some much-needed theoretical understanding to this and demonstrates how important configuration evaluation is even in the simplest case of being used with random search. I think this could be done easily and locally by adding a paragraph to the intro.\n\nAs another point regarding novelty, I think the authors should make clear that approaches for adaptively deciding how many resources to use for which evaluation have been studied for (at least) 23 years in the ML community -- see Maron & Moore, NIPS 1993: \"Hoeffding Races: Accelerating Model Selection Search for Classification and Function Approximation\" (https://papers.nips.cc/paper/841-hoeffding-races-accelerating-model-selection-search-for-classification-and-function-approximation). 
Again, this could be done by a paragraph in the intro. \n\nOverall, I think for this paper having the related work section at the end leads to many concepts appearing to be new in the paper that turn out not to be new in the end, which is a bit of a let-down. I encourage the authors to prominently discuss related work, including the recent trends in Bayesian optimization towards configuration evaluation, in the beginning, and then clearly state the contribution of this paper by positioning it in the context of that related work and saying what exactly is new. (I think the answer is \"very simple method\", \"great empirical results for several deep learning tasks\" and \"much-needed new theoretical results\", which is a very nice contribution.) I'm giving an accepting score trusting that the authors will follow this suggestion.\n\n\nI have some responses to some of the author responses:\n\n1) \"In response to your question, we ran an experiment modeled after the empirical studies in Krueger et al tuning 2 hyperparameters of a kernel SVM to compare CVST (Krueger et al 2015) and Hyperband. Hyperband is 3-4x faster than CVST on this experiment and the two achieve similar test performance. Notably, CVST was only 50% faster than standard holdout. For the experiments in our paper, we excluded CVST due to the aforementioned theoretical differences and because CVST is not an anytime algorithm, but as we perform more experiments, we will update the draft to reflect this comparison.\"\n\nGreat, I am looking forward to seeing the details on these experiments before the decision phase.\n\n2) \"Hyperband makes no assumptions on the shape or rate of convergence of the validation error, just that it eventually converges.\"\n\nIt's only the worst-case analysis that makes no assumption, but of course one would not be happy with that worst-case performance of being 5x worse than random search. (The 5x is what the authors call \"modestly worse, by a log factor\"; it's the logarithm of the dataset size or of the number of epochs, both of which tend to be large numbers). I think this number of 5x should be stated explicitly somewhere for the authors choice of Hyperband parameters. (E.g., at the beginning of the experiments, when Hyperband's parameters are stated.)\n\n3) \"Like random search, it is also embarrassingly parallel.\"\n\nI think this is not quite correct. Let's say I want to tune hyperparameters on ImageNet and each hyperparameter evaluation takes 1 week, but I have 100 GPUs, then random search will give a decent solution (the best of 100 random configurations) after 1 week. However, Hyperband will require 5 weeks before it will give any solution. Again, the modest log factor is a factor of 5. To me, \"embarassingly parallel\" would mean making great predictions after a week if you throw enough resources at it.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Hyperband: Bandit-Based Configuration Evaluation for Hyperparameter Optimization
["Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar"]
Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian Optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation. We present Hyperband, a novel algorithm for hyperparameter optimization that is simple, flexible, and theoretically sound. Hyperband is a principled early-stopping method that adaptively allocates a predefined resource, e.g., iterations, data samples or number of features, to randomly sampled configurations. We compare Hyperband with state-of-the-art Bayesian Optimization methods on several hyperparameter optimization problems. We observe that Hyperband can provide over an order of magnitude speedup over competitors on a variety of neural network and kernel-based learning problems.
["hyperband", "configuration evaluation", "hyperparameter optimization hyperband", "hyperparameter optimization performance", "machine", "algorithms depends", "good set", "hyperparameters", "recent approaches", "bayesian optimization"]
https://openreview.net/forum?id=ry18Ww5ee
https://openreview.net/pdf?id=ry18Ww5ee
https://openreview.net/forum?id=ry18Ww5ee&noteId=ryVZxPfVe
SydvmezVe
SkXIrV9le
ICLR.cc/2017/conference/-/paper213/official/review
{"title": ".", "rating": "4: Ok but not good enough - rejection", "review": "This paper proposes a generative model of videos composed of a background and a set of 2D objects (sprites). Optimization is performed under a VAE framework.\n\nThe authors' proposal of an outer product of softmaxed vectors (resulting in a 2D map that is delta-like), composed with a convolution, is a very interesting way to achieve translation of an image with differentiable parameters. It seems to be an attractive alternative to more complicated differentiable resamplers (such as those used by STNs) when only translation is needed.\n\nBelow I have made some comments regarding parts of the text, especially the experiments, that are not clear. The experimental section in particular seems rushed, with some results only alluded to but not given, not even in the appendix.\n\nFor an extremely novel and exotic proposal, showing only synthetic experiments could be excused. However, though there is some novelty in the method, it is disappointing that there isn't even an attempt at trying to tackle a problem with real data.\n\nI suggest as an example aerial videos (such as those taken from drone platforms), since the planar assumption that the authors make would most probably hold in that case.\n\nI also suggest that the authors do another pass at proof-reading the paper. There are missing references (\"Fig. ??\"), unfinished sentences (caption of Fig. 5), and the aforementioned issues with the experimental exposition.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Perception Updating Networks: On architectural constraints for interpretable video generative models
["Eder Santana", "Jose C Principe"]
We investigate a neural network architecture and statistical framework that models frames in videos using principles inspired by computer graphics pipelines. The proposed model explicitly represents "sprites" or its percepts inferred from maximum likelihood of the scene and infers its movement independently of its content. We impose architectural constraints that force the resulting architecture to behave as a recurrent what-where prediction network.
["Structured prediction", "Unsupervised Learning"]
https://openreview.net/forum?id=SkXIrV9le
https://openreview.net/pdf?id=SkXIrV9le
https://openreview.net/forum?id=SkXIrV9le&noteId=SydvmezVe
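The differentiable-translation trick praised in this review is easy to reconstruct: softmax two logit vectors, take their outer product to get a near-delta 2D map, and convolve the sprite with it to place it in the frame. A numpy/scipy sketch (an illustrative reconstruction under stated assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.signal import convolve2d

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

H = W = 16
row_logits = np.full(H, -10.0); row_logits[4] = 10.0   # sharp peak at row 4
col_logits = np.full(W, -10.0); col_logits[9] = 10.0   # sharp peak at col 9
delta_map = np.outer(softmax(row_logits), softmax(col_logits))  # ~one-hot 2D map

sprite = np.zeros((H, W)); sprite[6:10, 6:10] = 1.0    # a square "sprite"
frame = convolve2d(sprite, delta_map, mode="same", boundary="wrap")

# The sprite is shifted by the peak's offset from the centre; because the
# logits are sharp, the result is a clean translation whose parameters
# (the logits) remain differentiable throughout.
print(np.unravel_index(frame.argmax(), frame.shape))
```

Unlike a full spatial-transformer resampler, only translation is expressible here, which is precisely what makes the mechanism simple and cheap.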
HJUQPFZNl
SkXIrV9le
ICLR.cc/2017/conference/-/paper213/official/review
{"title": "Mostly incremental generative model of video data with preliminary experimental results", "rating": "4: Ok but not good enough - rejection", "review": "This paper presents a generative model of video sequence data where the frames are assumed to be generated by a static background with a 2d sprite composited onto it at each timestep. The sprite itself is allowed to dynamically change its appearance and location within the image from frame to frame. This paper follows the VAE (Variational Autoencoder) approach, where a recognition/inference network allows them to recover the latent state at each timestep.\n\nSome results are presented on simple synthetic data (such as a moving rectangle on a black background or the \u201cMoving MNIST\u201d data. However, the results are preliminary and I suspect that the assumptions used in the paper are far too strong too be useful in real videos. On the Moving MNIST data, the numerical results are not competitive to state of the art numbers.\n\nThe model itself is also not particularly novel and the work currently misses some relevant citations. The form of the forward model, for example, could be viewed as a variation on the DRAW paper by Gregor et al (ICML 2014). Efficient Inference in Occlusion-Aware Generative Models of Images by Huang & Murphy (ICLR) is another relevant work, which used a variational auto-encoder with a spatial transformer and an RNN-like sequence model to model the appearance of multiple sprites on a background.\n\nFinally, the exposition in this paper is short on many details and I don\u2019t believe that the paper is reproducible from the text alone. For example, it is not clear what the form of the recognition model is\u2026 Low-level details (which are very important) are also not presented, such as initialization strategy.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Perception Updating Networks: On architectural constraints for interpretable video generative models
["Eder Santana", "Jose C Principe"]
We investigate a neural network architecture and statistical framework that models frames in videos using principles inspired by computer graphics pipelines. The proposed model explicitly represents "sprites" or its percepts inferred from maximum likelihood of the scene and infers its movement independently of its content. We impose architectural constraints that force the resulting architecture to behave as a recurrent what-where prediction network.
["Structured prediction", "Unsupervised Learning"]
https://openreview.net/forum?id=SkXIrV9le
https://openreview.net/pdf?id=SkXIrV9le
https://openreview.net/forum?id=SkXIrV9le&noteId=HJUQPFZNl
ByfxSqbNx
SkXIrV9le
ICLR.cc/2017/conference/-/paper213/official/review
{"title": "Experimental results are too preliminary", "rating": "4: Ok but not good enough - rejection", "review": "This paper presents an approach to modeling videos based on a decomposition into a background + 2d sprites with a latent hidden state. The exposition is OK, and I think the approach is sensible, but the main issue with this paper is that it is lacking experiments on non-synthetic datasets. As such, while I find the graphics inspired questions the paper is investigating interesting, I don't think it is clear that this work introduces useful machinery for modeling more general videos.\n\nI think this paper is more appropriate as a workshop contribution in its current form.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Perception Updating Networks: On architectural constraints for interpretable video generative models
["Eder Santana", "Jose C Principe"]
We investigate a neural network architecture and statistical framework that models frames in videos using principles inspired by computer graphics pipelines. The proposed model explicitly represents "sprites" or its percepts inferred from maximum likelihood of the scene and infers its movement independently of its content. We impose architectural constraints that force the resulting architecture to behave as a recurrent what-where prediction network.
["Structured prediction", "Unsupervised Learning"]
https://openreview.net/forum?id=SkXIrV9le
https://openreview.net/pdf?id=SkXIrV9le
https://openreview.net/forum?id=SkXIrV9le&noteId=ByfxSqbNx
BkjpniLEg
B184E5qee
ICLR.cc/2017/conference/-/paper534/official/review
{"title": "Review", "rating": "7: Good paper, accept", "review": "The authors present a simple method to affix a cache to neural language models, which provides in effect a copying mechanism from recently used words. Unlike much related work in neural networks with copying mechanisms, this mechanism need not be trained with long-term backpropagation, which makes it efficient and scalable to much larger cache sizes. They demonstrate good improvements on language modeling by adding this cache to RNN baselines.\n\nThe main contribution of this paper is the observation that simply using the hidden states h_i as keys for words x_i, and h_t as the query vector, naturally gives a lookup mechanism that works fine without tuning by backprop. This is a simple observation and might already exist as folk knowledge among some people, but it has nice implications for scalability and the experiments are convincing.\n\nThe basic idea of repurposing locally-learned representations for large-scale attention where backprop would normally be prohibitively expensive is an interesting one, and could probably be used to improve other types of memory networks.\n\nMy main criticism of this work is its simplicity and incrementality when compared to previously existing literature. As a simple modification of existing NLP models, but with good empirical success, simplicity and practicality, it is probably more suitable for an NLP-specific conference. However, I think that approaches that distill recent work into a simple, efficient, applicable form should be rewarded and that this tool will be useful to a large enough portion of the ICLR community to recommend its publication.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Improving Neural Language Models with a Continuous Cache
["Edouard Grave", "Armand Joulin", "Nicolas Usunier"]
We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural networks and cache models used with count-based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks.
["Natural language processing"]
https://openreview.net/forum?id=B184E5qee
https://openreview.net/pdf?id=B184E5qee
https://openreview.net/forum?id=B184E5qee&noteId=BkjpniLEg
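The cache mechanism at the centre of these reviews comes down to one formula: blend the base LM distribution with a distribution over recently seen words, scored by dot products between the current hidden state and the stored ones. A numpy sketch with toy stand-ins (parameter names follow the usual cache formulation; this is illustrative, not the authors' code):

```python
import numpy as np

def cache_lm_probs(p_vocab, hiddens, words, h_t, theta=0.3, lam=0.1):
    scores = np.exp(theta * hiddens @ h_t)        # one score per cached position
    p_cache = np.zeros_like(p_vocab)
    for s, w in zip(scores, words):               # scatter-add scores onto word ids
        p_cache[w] += s
    p_cache /= p_cache.sum()
    return (1.0 - lam) * p_vocab + lam * p_cache  # linear interpolation

rng = np.random.default_rng(0)
V, d, n = 50, 16, 200                             # vocab, hidden dim, cache size
p_vocab = np.full(V, 1.0 / V)                     # uniform base LM for the demo
hiddens = rng.normal(size=(n, d))                 # h_1..h_n act as cache keys
words = rng.integers(0, V, size=n)                # x_1..x_n act as cache values
p = cache_lm_probs(p_vocab, hiddens, words, h_t=hiddens[-1])
print(p.sum(), p.argmax())                        # a valid distribution over V words
```

Nothing is trained here, which is exactly the point the first review makes: the keys are repurposed hidden states, so the cache can grow to thousands of positions without backpropagating through time.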
H1YGZUMNx
B184E5qee
ICLR.cc/2017/conference/-/paper534/official/review
{"title": "Review", "rating": "9: Top 15% of accepted papers, strong accept", "review": "This paper not only shows that a cache model on top of a pre-trained RNN can improve language modeling, but also illustrates a shortcoming of standard RNN models in that they are unable to capture this information themselves. Regardless of whether this is due to the small BPTT window (35 is standard) or an issue with the capability of the RNN itself, this is a useful insight. This technique is an interesting variation of memory augmented neural networks with a number of advantages to many of the standard memory augmented architectures.\n\nThey illustrate the neural cache model on not just the Penn Treebank but also WikiText-2 and WikiText-103, two datasets specifically tailored to illustrating long term dependencies with a more realistic vocabulary size. I have not seen the ability to refer up to 2000 words back previously.\nI recommend this paper be accepted. There is additionally extensive analysis of the hyperparameters on these datasets, providing further insight.\n\nI recommend this interesting and well analyzed paper be accepted.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Improving Neural Language Models with a Continuous Cache
["Edouard Grave", "Armand Joulin", "Nicolas Usunier"]
We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural networks and cache models used with count-based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks.
["Natural language processing"]
https://openreview.net/forum?id=B184E5qee
https://openreview.net/pdf?id=B184E5qee
https://openreview.net/forum?id=B184E5qee&noteId=H1YGZUMNx
SJv48zBNl
B184E5qee
ICLR.cc/2017/conference/-/paper534/official/review
{"title": "review", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes a simple extension to a neural network language model by adding a cache component. \nThe model stores <previous hidden state, word> pairs in memory cells and uses the current hidden state to control the lookup. \nThe final probability of a word is a linear interpolation between a standard language model and the cache language model. \nAdditionally, an alternative that uses global normalization instead of linear interpolation is also presented. \nExperiments on PTB, Wikitext, and LAMBADA datasets show that the cache model improves over standard LSTM language model.\n\nThere is a lot of similar work on memory-augmented/pointer neural language models, and the main difference is that the proposed method is simple and scales to a large cache size.\nHowever, since the technical contribution is rather limited, the experiments need to be more thorough and conclusive. \nWhile it is obvious from the results that adding a cache component improves over language models without memory, it is still unclear that this is the best way to do it (instead of, e.g., using pointer networks). \nA side-by-side comparison of models with pointer networks vs. models with cache with roughly the same number of parameters is needed to convincingly argue that the proposed method is a better alternative (either because it achieves lower perplexity, faster to train but similar test perplexity, faster at test time, etc.)\n\nSome questions:\n- In the experiment results, for your neural cache model, are those results with linear interpolation or global normalization, or the best model? Can you show results for both? \n- Why is the neural cache model worse than LSTM on Ctrl (Lambada dataset)? Please also show accuracy on this dataset. \n- It is also interesting that the authors mentioned that training the cache component instead of only using it at test time gives little improvements. Are the results about the same or worse?", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Improving Neural Language Models with a Continuous Cache
["Edouard Grave", "Armand Joulin", "Nicolas Usunier"]
We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural networks and cache models used with count-based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks.
["Natural language processing"]
https://openreview.net/forum?id=B184E5qee
https://openreview.net/pdf?id=B184E5qee
https://openreview.net/forum?id=B184E5qee&noteId=SJv48zBNl
rkrSDoZ4e
ryHlUtqge
ICLR.cc/2017/conference/-/paper492/official/review
{"title": "Review", "rating": "8: Top 50% of accepted papers, clear accept", "review": "The paper proposes to study the problem of semi-supervised RL where one has to distinguish between labelled MDPs that provide rewards, and unlabelled MDPs that are not associated with any reward signal. The underlying is very simple since it aims at simultaneously learning a policy based on the REINFORCE+entropy regularization technique, and also a model of the reward that will be used (as in inverse reinforcement learning) as a feedback over unlabelled MDPs. The experiments are made on different continous domains and show interesting results\n\nThe paper is well written, and easy to understand. It is based on a simple but efficient idea of simultaneously learning the policy and a model of the reward and the resulting algorithm exhibit interesting properties. The proposed idea is quite obvious, but the authors are the first ones to propose to test such a model. The experiments could be made stronger by mixing continuous and discrete problems but are convincing. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Generalizing Skills with Semi-Supervised Reinforcement Learning
["Chelsea Finn", "Tianhe Yu", "Justin Fu", "Pieter Abbeel", "Sergey Levine"]
Deep reinforcement learning (RL) can acquire complex behaviors from low-level inputs, such as images. However, real-world applications of such methods require generalizing to the vast variability of the real world. Deep networks are known to achieve remarkable generalization when provided with massive amounts of labeled data, but can we provide this breadth of experience to an RL agent, such as a robot? The robot might continuously learn as it explores the world around it, even while it is deployed and performing useful tasks. However, this learning requires access to a reward function, to tell the agent whether it is succeeding or failing at its task. Such reward functions are often hard to measure in the real world, especially in domains such as robotics and dialog systems, where the reward could depend on the unknown positions of objects or the emotional state of the user. On the other hand, it is often quite practical to provide the agent with reward functions in a limited set of situations, such as when a human supervisor is present, or in a controlled laboratory setting. Can we make use of this limited supervision, and still benefit from the breadth of experience an agent might collect in the unstructured real world? In this paper, we formalize this problem setting as semi-supervised reinforcement learning (SSRL), where the reward function can only be evaluated in a set of “labeled” MDPs, and the agent must generalize its behavior to the wide range of states it might encounter in a set of “unlabeled” MDPs, by using experience from both settings. Our proposed method infers the task objective in the unlabeled MDPs through an algorithm that resembles inverse RL, using the agent’s own prior experience in the labeled MDPs as a kind of demonstration of optimal behavior. We evaluate our method on challenging, continuous control tasks that require control directly from images, and show that our approach can improve the generalization of a learned deep neural network policy by using experience for which no reward function is available. We also show that our method outperforms direct supervised learning of the reward.
["Reinforcement Learning"]
https://openreview.net/forum?id=ryHlUtqge
https://openreview.net/pdf?id=ryHlUtqge
https://openreview.net/forum?id=ryHlUtqge&noteId=rkrSDoZ4e
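The alternation this review sketches (regress a reward model on labeled experience, run policy gradient with the learned reward elsewhere) can be illustrated on a one-step toy task. The sketch below mimics only the structure, not the exact S3G objective; the quadratic task, features, and step sizes are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_reward(a):                      # observable only in the "labeled" MDP
    return -(a - 3.0) ** 2

theta = 0.0                              # Gaussian policy: a ~ N(theta, 1)
w = np.zeros(3)                          # linear reward model on [1, a, -a^2]
baseline = 0.0                           # running average for variance reduction

for it in range(2000):
    a = rng.normal(theta, 1.0, size=32)              # batch of one-step rollouts
    phi = np.stack([np.ones_like(a), a, -a ** 2], 1)
    # (1) reward regression on labeled-MDP data (one least-squares SGD step)
    w += 1e-3 * phi.T @ (true_reward(a) - phi @ w) / len(a)
    # (2) REINFORCE in the "unlabeled" MDP, scoring actions with the *learned*
    #     reward r_hat rather than the inaccessible true reward
    r_hat = phi @ w
    theta += 1e-2 * np.mean((a - theta) * (r_hat - baseline))
    baseline = 0.9 * baseline + 0.1 * r_hat.mean()

print("theta -> %.2f (optimum is 3.0)" % theta)      # should approach the goal
```

In the paper both pieces are deep networks trained from images under a maximum-entropy formulation; the alternation itself is what this toy preserves.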
BknFD3GVg
ryHlUtqge
ICLR.cc/2017/conference/-/paper492/official/review
{"title": "An approach to semi supervised RL using inverse RL ", "rating": "6: Marginally above acceptance threshold", "review": "In supervised learning, a significant advance occurred when the framework of semi-supervised learning was adopted, which used the weaker approach of unsupervised learning to infer some property, such as a distance measure or a smoothness regularizer, which could then be used with a small number of labeled examples. The approach rested on the assumption of smoothness on the manifold, typically. \n\nThis paper attempts to stretch this analogy to reinforcement learning, although the analogy is somewhat incoherent. Labels are not equivalent to reward functions, and positive or negative rewards do not mean the same as positive and negative labels. Still, the paper makes a worthwhile attempt to explore this notion of semi-supervised RL, which is clearly an important area that deserves more attention. The authors use the term \"labeled MDP\" to mean the typical MDP framework where the reward function is unknown. They use the confusing term \"unlabeled MDP\" to mean the situation where the reward is unknown, which is technically not an MDP (but a controlled Markov process). \n\nIn the classical RL transfer learning setup, the agent is attempting to transfer learning from a source \"labeled\" MDP to a target \"labeled\" MDP (where both reward functions are known, but the learned policy is known only in the source MDP). In the semi-supervised RL setting, the target is an \"unlabeled\" CMP, and the source is both a \"labeled\" MDP and an \"unlabeled\" CMP. The basic approach is to use inverse RL to infer the unknown \"labels\" and then attempt to construct transfer. A further restriction is made to linearly solvable MDPs for technical reasons. Experiments are reported using three relatively complex domains using the Mujoco physics simulator. \n\nThe work is interesting, but in the opinion of this reviewer, the work fails to provide a simple sufficiently general notion of semi-supervised RL that will be of sufficiently wide interest to the RL community. That remains to be done by a future paper, but in the interim, the work here is sufficiently interesting and the problem is certainly a worthwhile one to study. ", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Generalizing Skills with Semi-Supervised Reinforcement Learning
["Chelsea Finn", "Tianhe Yu", "Justin Fu", "Pieter Abbeel", "Sergey Levine"]
Deep reinforcement learning (RL) can acquire complex behaviors from low-level inputs, such as images. However, real-world applications of such methods require generalizing to the vast variability of the real world. Deep networks are known to achieve remarkable generalization when provided with massive amounts of labeled data, but can we provide this breadth of experience to an RL agent, such as a robot? The robot might continuously learn as it explores the world around it, even while it is deployed and performing useful tasks. However, this learning requires access to a reward function, to tell the agent whether it is succeeding or failing at its task. Such reward functions are often hard to measure in the real world, especially in domains such as robotics and dialog systems, where the reward could depend on the unknown positions of objects or the emotional state of the user. On the other hand, it is often quite practical to provide the agent with reward functions in a limited set of situations, such as when a human supervisor is present, or in a controlled laboratory setting. Can we make use of this limited supervision, and still benefit from the breadth of experience an agent might collect in the unstructured real world? In this paper, we formalize this problem setting as semi-supervised reinforcement learning (SSRL), where the reward function can only be evaluated in a set of “labeled” MDPs, and the agent must generalize its behavior to the wide range of states it might encounter in a set of “unlabeled” MDPs, by using experience from both settings. Our proposed method infers the task objective in the unlabeled MDPs through an algorithm that resembles inverse RL, using the agent’s own prior experience in the labeled MDPs as a kind of demonstration of optimal behavior. We evaluate our method on challenging, continuous control tasks that require control directly from images, and show that our approach can improve the generalization of a learned deep neural network policy by using experience for which no reward function is available. We also show that our method outperforms direct supervised learning of the reward.
["Reinforcement Learning"]
https://openreview.net/forum?id=ryHlUtqge
https://openreview.net/pdf?id=ryHlUtqge
https://openreview.net/forum?id=ryHlUtqge&noteId=BknFD3GVg
B13U25zEx
ryHlUtqge
ICLR.cc/2017/conference/-/paper492/official/review
{"title": "Review", "rating": "7: Good paper, accept", "review": "This paper formalizes the problem setting of having only a subset of available MDPs for which one has access to a reward. The authors name this setting \"semi-supervised reinforcement learning\" (SSRL), as a reference to semi-supervised learning (where one only has access to labels for a subset of the dataset). They provide an approach for solving SSRL named semi-supervised skill generalization (S3G), which builds on the framework of maximum entropy control. The whole approach is straightforward and amounts to an EM algorithm with partial labels (: they alternate iteratively between estimating a reward function (parametrized) and fitting a control policy using this reward function. They provide experiments on 4 tasks (obstacle, 2-link reacher, 2-link reacher with vision, half-cheetah) in MuJoCo.\n\nThe paper is well-written, and is overall clear. The appendix provides some more context, I think a few implementation details are missing to be able to fully reproduce the experiments from the paper, but they will provide the code.\n\nThe link to inverse reinforcement learning seems to be done correctly. However, there is no reference to off-policy policy learning, and, for instance, it seems to me that the \\tau \\in D_{samp} term of equation (3) could benefit from variance reduction as in e.g. TB(\\lambda) [Precup et al. 2000] or Retrace(\\lambda) [Munos et al. 2016].\n\nThe experimental section is convincing, but I would appreciate a precision (and small discussion) of this sentence \"To extensively test the generalization capabilities of the policies learned with each method, we measure performance on a wide range of settings that is a superset of the unlabeled and labeled MDPs\" with numbers for the different scenarios (or the replacement of superset by \"union\" if this is the case). It may explain better the poor results of \"oracle\" on \"obstacle\" and \"2-link reacher\", and reinforce* the further sentences \"In the obstacle task, the true reward function is not sufficiently shaped for learning in the unlabeled MDPs. Hence, the reward regression and oracle methods perform poorly\".\n\nCorrection on page 4: \"5-tuple M_i = (S, A, T, R)\" is a 4-tuple.\n\nOverall, I think that this is a good and sound paper. I am personally unsure as to if all the parallels and/or references to previous work are complete, thus my confidence score of 3.\n\n(* pun intended)", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Generalizing Skills with Semi-Supervised Reinforcement Learning
["Chelsea Finn", "Tianhe Yu", "Justin Fu", "Pieter Abbeel", "Sergey Levine"]
Deep reinforcement learning (RL) can acquire complex behaviors from low-level inputs, such as images. However, real-world applications of such methods require generalizing to the vast variability of the real world. Deep networks are known to achieve remarkable generalization when provided with massive amounts of labeled data, but can we provide this breadth of experience to an RL agent, such as a robot? The robot might continuously learn as it explores the world around it, even while it is deployed and performing useful tasks. However, this learning requires access to a reward function, to tell the agent whether it is succeeding or failing at its task. Such reward functions are often hard to measure in the real world, especially in domains such as robotics and dialog systems, where the reward could depend on the unknown positions of objects or the emotional state of the user. On the other hand, it is often quite practical to provide the agent with reward functions in a limited set of situations, such as when a human supervisor is present, or in a controlled laboratory setting. Can we make use of this limited supervision, and still benefit from the breadth of experience an agent might collect in the unstructured real world? In this paper, we formalize this problem setting as semi-supervised reinforcement learning (SSRL), where the reward function can only be evaluated in a set of “labeled” MDPs, and the agent must generalize its behavior to the wide range of states it might encounter in a set of “unlabeled” MDPs, by using experience from both settings. Our proposed method infers the task objective in the unlabeled MDPs through an algorithm that resembles inverse RL, using the agent’s own prior experience in the labeled MDPs as a kind of demonstration of optimal behavior. We evaluate our method on challenging, continuous control tasks that require control directly from images, and show that our approach can improve the generalization of a learned deep neural network policy by using experience for which no reward function is available. We also show that our method outperforms direct supervised learning of the reward.
["Reinforcement Learning"]
https://openreview.net/forum?id=ryHlUtqge
https://openreview.net/pdf?id=ryHlUtqge
https://openreview.net/forum?id=ryHlUtqge&noteId=B13U25zEx
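A rough sketch of the EM-style alternation this review describes, under the assumption of generic `rollout`, `fit_reward` (IRL-style regression that treats labeled experience as demonstrations), and `maxent_policy_update` subroutines; all of these names are placeholders, not the authors' actual API.

```python
def s3g_sketch(labeled_rollouts, unlabeled_mdps, policy, reward_theta,
               rollout, fit_reward, maxent_policy_update, n_iters=50):
    """EM-style loop: sample the unlabeled MDPs with the current policy,
    refit the reward so labeled rollouts look near-optimal relative to
    those samples (cf. inverse RL), then improve the policy under
    maximum-entropy control with the learned reward."""
    for _ in range(n_iters):
        samples = [rollout(mdp, policy) for mdp in unlabeled_mdps]
        # Reward step: labeled experience plays the role of demonstrations,
        # samples from the current policy act as the contrast set.
        reward_theta = fit_reward(reward_theta, demos=labeled_rollouts,
                                  contrast=samples)
        # Policy step: max-ent policy optimization under the learned reward.
        policy = maxent_policy_update(policy, samples, reward_theta)
    return policy, reward_theta
```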
rkgMSRKrx
SkkTMpjex
ICLR.cc/2017/conference/-/paper593/official/review
{"title": "Review - Distributed K-FAC", "rating": "7: Good paper, accept", "review": "In this paper, the authors present a partially asynchronous variant of the K-FAC method. The authors adapt/modify the K-FAC method in order to make it computationally tractable for optimizing deep neural networks. The method distributes the computation of the gradients and the other quantities required by the K-FAC method (2nd order statistics and Fisher Block inversion). The gradients are computed in synchronous manner by the \u2018gradient workers\u2019 and the quantities required by the K-FAC method are computed asynchronously by the \u2018stats workers\u2019 and \u2018additional workers\u2019. The method can be viewed as an augmented distributed Synchronous SGD method with additional computational nodes that update the approximate Fisher matrix and computes its inverse. The authors illustrate the performance of the method on the CIFAR-10 and ImageNet datasets using several models and compare with synchronous SGD.\n\nThe main contributions of the paper are:\n1) Distributed variant of K-FAC that is efficient for optimizing deep neural networks. The authors mitigate the computational bottlenecks of the method (second order statistic computation and Fisher Block inverses) by asynchronous updating.\n2) The authors propose a \u201cdoubly-factored\u201d Kronecker approximation for layers whose inputs are too large to be handled by the standard Kronecker-factored approximation. They also present (Appendix A) a cheaper Kronecker factored approximation for convolutional layers.\n3) Empirically illustrate the performance of the method, and show:\n- Asynchronous Fisher Block inversions do not adversely affect the performance of the method (CIFAR-10)\n- K-FAC is faster than Synchronous SGD (with and without BN, and with momentum) (ImageNet)\n- Doubly-factored K-FAC method does not deteriorate the performance of the method (ImageNet and ResNet)\n- Favorable scaling properties of K-FAC with mini-batch size\n\nPros:\n- Paper presents interesting ideas on how to make computationally demanding aspects of K-FAC tractable. \n- Experiments are well thought out and highlight the key advantages of the method over Synchronous SGD (with and without BN).\n\nCons: \n- \u201c\u2026it should be possible to scale our implementation to a larger distributed system with hundreds of workers.\u201d The authors mention that this should be possible, but fail to mention the potential issues with respect to communication, load balancing and node (worker) failure. That being said, as a proof-of-concept, the method seems to perform well and this is a good starting point.\n- Mini-batch size scaling experiments: the authors do not provide validation curves, which may be interesting for such an experiment. Keskar et. al. 2016 (On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima) provide empirical evidence that large-batch methods do not generalize as well as small batch methods. As a result, even if the method has favorable scaling properties (in terms of mini-batch sizes), this may not be effective.\n\nThe paper is clearly written and easy to read, and the authors do a good job of communicating the motivation and main ideas of the method. There are a few minor typos and grammatical errors. 
\n\nTypos:\n- \u201cupdates that accounts for\u201d \u2014 \u201cupdates that account for\u201d\n- \u201cKronecker product of their inverse\u201d \u2014 \u201cKronecker product of their inverses\u201d\n- \u201cwhere P is distribution over\u201d \u2014 \u201cwhere P is the distribution over\u201d\n- \u201cback-propagated loss derivativesas\u201d \u2014 \u201cback-propagated loss derivatives as\u201d\n- \u201cinverse of the Fisher\u201d \u2014 \u201cinverse of the Fisher Information matrix\u201d\n- \u201cwhich amounts of several matrix\u201d \u2014 \u201cwhich amounts to several matrix\u201d\n- \u201cThe diagram illustrate the distributed\u201d \u2014 \u201cThe diagram illustrates the distributed\u201d\n- \u201cGradient workers computes\u201d \u2014 \u201cGradient workers compute\u201d \n- \u201cStat workers computes\u201d \u2014 \u201cStat workers compute\u201d \n- \u201coccasionally and uses stale values\u201d \u2014 \u201coccasionally and using stale values\u201d \n- \u201cThe factors of rank-1 approximations\u201d \u2014 \u201cThe factors of the rank-1 approximations\u201d\n- \u201cbe the first singular value and its left and right singular vectors\u201d \u2014 \u201cbe the first singular value and the left and right singular vectors \u2026 , respectively.\u201d\n- \u201c\\Psi is captures\u201d \u2014 \u201c\\Psi captures\u201d\n- \u201cmultiplying the inverses of the each smaller matrices\u201d \u2014 \u201cmultiplying the inverses of each of the smaller matrices\u201d\n- \u201cwhich is a nested applications of the reshape\u201d \u2014 \u201cwhich is a nested application of the reshape\u201d\n- \u201cprovides a computational feasible alternative\u201d \u2014 \u201cprovides a computationally feasible alternative\u201d\n- \u201caccording the geometric mean\u201d \u2014 \u201caccording to the geometric mean\u201d\n- \u201canalogous to shrink\u201d \u2014 \u201canalogous to shrinking\u201d\n- \u201capplied to existing model-specification code\u201d \u2014 \u201capplied to the existing model-specification code\u201d\n- \u201c: that the alternative parametrization\u201d \u2014 \u201c: the alternative parameterization\u201d\n\nMinor Issues:\n- In paragraph 2 (Introduction) the authors mention several methods that approximate the curvature matrix. However, several methods that have been developed are not mentioned. For example:\n1) (AdaGrad) Adaptive Subgradient Methods for Online Learning and Stochastic Optimization (http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)\n2) Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization (https://arxiv.org/abs/1607.01231)\n3) adaQN: An Adaptive Quasi-Newton Algorithm for Training RNNs (http://link.springer.com/chapter/10.1007/978-3-319-46128-1_1)\n4) A Self-Correcting Variable-Metric Algorithm for Stochastic Optimization (http://jmlr.org/proceedings/papers/v48/curtis16.html)\n5) L-SR1: A Second Order Optimization Method for Deep Learning (https://openreview.net/pdf?id=By1snw5gl)\n- Page 2, equation s = WA, is there a dimension issue in this expression?\n- x-axis for top plots in Figures 3,4,5,7 (Updates x XXX) appear to be a headings for the lower plots.\n- \u201cJames Martens. Deep Learning via Hessian-Free Optimization\u201d appears twice in References section.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Distributed Second-Order Optimization using Kronecker-Factored Approximations
["Jimmy Ba", "Roger Grosse", "James Martens"]
As more computational resources become available, machine learning researchers train ever larger neural networks on millions of data points using stochastic gradient descent (SGD). Although SGD scales well in terms of both the size of the dataset and the number of parameters of the model, it has rapidly diminishing returns as parallel computing resources increase. Second-order optimization methods have an affinity for well-estimated gradients and large mini-batches, and can therefore benefit much more from parallel computation in principle. Unfortunately, they often employ severe approximations to the curvature matrix in order to scale to large models with millions of parameters, limiting their effectiveness in practice versus well-tuned SGD with momentum. The recently proposed K-FAC method (Martens and Grosse, 2015) uses a stronger and more sophisticated curvature approximation, and has been shown to make much more per-iteration progress than SGD, while only introducing a modest overhead. In this paper, we develop a version of K-FAC that distributes the computation of gradients and additional quantities required by K-FAC across multiple machines, thereby taking advantage of the method’s superior scaling to large mini-batches and mitigating its additional overheads. We provide a Tensorflow implementation of our approach which is easy to use and can be applied to many existing codebases without modification. Additionally, we develop several algorithmic enhancements to K-FAC which can improve its computational performance for very large models. Finally, we show that our distributed K-FAC method speeds up training of various state-of-the-art ImageNet classification models by a factor of two compared to Batch Normalization (Ioffe and Szegedy, 2015).
["Deep learning", "Optimization"]
https://openreview.net/forum?id=SkkTMpjex
https://openreview.net/pdf?id=SkkTMpjex
https://openreview.net/forum?id=SkkTMpjex&noteId=rkgMSRKrx
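For readers unfamiliar with the per-layer update being distributed here, a minimal NumPy sketch of the Kronecker-factored step for a dense layer follows; the damping constant and moving-average decay are illustrative choices, and the asynchronous machinery the paper adds is omitted.

```python
import numpy as np

def kfac_layer_update(W, grad_W, a, g, A, G, lr=0.01, decay=0.95, damping=1e-2):
    """One Kronecker-factored step for a dense layer s = W a.

    A ~ E[a a^T] (input second moments), G ~ E[g g^T] (second moments of the
    back-propagated derivatives). Since the Fisher block is approximated by a
    Kronecker product of A and G, its inverse applied to vec(grad_W) reduces
    to the matrix product G^{-1} grad_W A^{-1}.
    """
    A = decay * A + (1 - decay) * (a @ a.T) / a.shape[1]   # a: (d_in, batch)
    G = decay * G + (1 - decay) * (g @ g.T) / g.shape[1]   # g: (d_out, batch)
    A_inv = np.linalg.inv(A + damping * np.eye(A.shape[0]))
    G_inv = np.linalg.inv(G + damping * np.eye(G.shape[0]))
    W_new = W - lr * (G_inv @ grad_W @ A_inv)              # preconditioned step
    return W_new, A, G
```

Inverting the two small factors replaces inverting the full Fisher block, which is what makes farming the inversions out to separate workers attractive.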
H1gDgMH4e
SkkTMpjex
ICLR.cc/2017/conference/-/paper593/official/review
{"title": "Official Review", "rating": "6: Marginally above acceptance threshold", "review": "The paper proposes an asynchronous distributed K-FAC method for efficient optimization of \ndeep networks. The authors introduce interesting ideas that many computationally demanding \nparts of the original K-FAC algorithm can be efficiently implemented in distributed fashion. The\ngradients and the second-order statistics are computed by distributed workers separately and \naggregated at the parameter server along with the inversion of the approximate Fisher matrix \ncomputed by a separate CPU machine. The experiments are performed in CIFAR-10 and ImageNet\nclassification problems using models such as AlexNet, ResNet, and GoogleReNet.\n\nThe paper includes many interesting ideas and techniques to derive an asynchronous distributed \nversion from the original K-FAC. And the experiments also show good results on a few \ninteresting cases. However, I think the empirical results are not thorough and convincing \nenough yet. Particularly, experiments on various and large number of GPU workers (in the same machine, \nor across multiple workers) are desired. For example, as pointed by the authors in the answer of a comment,\nChen et.al. (Revisiting Distributed Synchronous SGD, 2015) used 100 workers to test their distributed deep \nlearning algorithm. Even considering that the authors have a limitation in computing resource under the \nacademic research setting, the maximum number of 4 or 8 GPUs seems too limited as the only test case of \ndemonstrating the efficiency of a distributed learning algorithm. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Distributed Second-Order Optimization using Kronecker-Factored Approximations
["Jimmy Ba", "Roger Grosse", "James Martens"]
As more computational resources become available, machine learning researchers train ever larger neural networks on millions of data points using stochastic gradient descent (SGD). Although SGD scales well in terms of both the size of the dataset and the number of parameters of the model, it has rapidly diminishing returns as parallel computing resources increase. Second-order optimization methods have an affinity for well-estimated gradients and large mini-batches, and can therefore benefit much more from parallel computation in principle. Unfortunately, they often employ severe approximations to the curvature matrix in order to scale to large models with millions of parameters, limiting their effectiveness in practice versus well-tuned SGD with momentum. The recently proposed K-FAC method (Martens and Grosse, 2015) uses a stronger and more sophisticated curvature approximation, and has been shown to make much more per-iteration progress than SGD, while only introducing a modest overhead. In this paper, we develop a version of K-FAC that distributes the computation of gradients and additional quantities required by K-FAC across multiple machines, thereby taking advantage of the method’s superior scaling to large mini-batches and mitigating its additional overheads. We provide a Tensorflow implementation of our approach which is easy to use and can be applied to many existing codebases without modification. Additionally, we develop several algorithmic enhancements to K-FAC which can improve its computational performance for very large models. Finally, we show that our distributed K-FAC method speeds up training of various state-of-the-art ImageNet classification models by a factor of two compared to Batch Normalization (Ioffe and Szegedy, 2015).
["Deep learning", "Optimization"]
https://openreview.net/forum?id=SkkTMpjex
https://openreview.net/pdf?id=SkkTMpjex
https://openreview.net/forum?id=SkkTMpjex&noteId=H1gDgMH4e
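The asynchronous pattern both reviews mention, namely optimizer steps that use stale Fisher-factor inverses while a separate worker refreshes them off the hot path, might be sketched as follows; the threading layout is an illustrative stand-in for the paper's separate inversion machine, not its actual implementation.

```python
import threading
import numpy as np

class StaleInverseCache:
    """Optimizer threads read the latest available inverse; a background
    worker refreshes it from the accumulated statistics at its own pace."""
    def __init__(self, dim, damping=1e-2):
        self.stats = np.eye(dim)
        self.inverse = np.eye(dim)   # possibly stale; that is the point
        self.damping = damping
        self._lock = threading.Lock()

    def push_stats(self, batch_cov, decay=0.95):
        """Called every optimization step by the stats workers."""
        with self._lock:
            self.stats = decay * self.stats + (1 - decay) * batch_cov

    def refresh(self):
        """Called only occasionally; the expensive inversion runs off the
        hot path, so the optimizer keeps stepping with stale values."""
        with self._lock:
            stats = self.stats.copy()
        inv = np.linalg.inv(stats + self.damping * np.eye(stats.shape[0]))
        with self._lock:
            self.inverse = inv
```

An optimizer thread would call push_stats each step and read cache.inverse when preconditioning, while refresh runs on its own thread (or, as in the paper, on a separate machine).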
HkYCUhr4g
Byk-VI9eg
ICLR.cc/2017/conference/-/paper293/official/review
{"title": "", "rating": "6: Marginally above acceptance threshold", "review": "This work brings multiple discriminators into GAN. From the result, multiple discriminators is useful for stabilizing. \n\nThe main problem of stabilizing seems is from gradient signal from discriminator, the authors motivation is using multiple discriminators to reduce this effect.\n\nI think this work indicates the direction is promising, however I think the authors may consider to add more result vs approach which enforce discriminator gradient, such as GAN with DAE (Improving Generative Adversarial Networks with Denoising Feature Matching), to show advantages of multiple discriminators.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Generative Multi-Adversarial Networks
["Ishan Durugkar", "Ian Gemp", "Sridhar Mahadevan"]
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
["Deep learning", "Unsupervised Learning", "Games"]
https://openreview.net/forum?id=Byk-VI9eg
https://openreview.net/pdf?id=Byk-VI9eg
https://openreview.net/forum?id=Byk-VI9eg&noteId=HkYCUhr4g
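A minimal sketch of a generator objective under multiple discriminators, as discussed in this review; `discriminators` is assumed to be a list of callables returning D_i(x) in (0, 1), and plain averaging is only one of the aggregation choices the paper explores.

```python
import numpy as np

def gman_generator_loss(fake_batch, discriminators):
    """Generator loss against multiple discriminators: each D_i scores the
    fakes, and the generator sees the mean of the individual GAN losses.
    With several (possibly weaker) critics, at least some are likely to
    provide a usable gradient signal early in training."""
    losses = []
    for D in discriminators:
        p_fake = np.clip(D(fake_batch), 1e-7, 1 - 1e-7)
        losses.append(-np.log(p_fake).mean())   # non-saturating form
    return float(np.mean(losses))
```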
r1D00RZNl
Byk-VI9eg
ICLR.cc/2017/conference/-/paper293/official/review
{"title": "Review", "rating": "7: Good paper, accept", "review": "In this interesting paper the authors explore the idea of using an ensemble of multiple discriminators in generative adversarial network training. This comes with a number of benefits, mainly being able to use less powerful discriminators which may provide better training signal to the generator early on in training when strong discriminators might overpower the generator.\n\nMy main comment is about the way the paper is presented. The caption of Figure 1. and Section 3.1 suggests using the best discriminator by taking the maximum over the performance of individual ensemble members. This does not appear to be the best thing to do because we are just bound to get a training signal that is stricter than any of the individual members of the ensemble. Then the rest of the paper explores relaxing the maximum and considers various averaging techniques to obtain a \u2019soft-discriminator\u2019. To me, this idea is far more appealing, and the results seem to support this, too. Skimming the paper it seems as if the authors mainly advocated always using the strongest discriminator, evidenced by my premature pre-review question earlier.\n\nOverall, I think this paper is a valuable contribution, and I think the idea of multiple discriminators is an interesting direction to pursue.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Generative Multi-Adversarial Networks
["Ishan Durugkar", "Ian Gemp", "Sridhar Mahadevan"]
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
["Deep learning", "Unsupervised Learning", "Games"]
https://openreview.net/forum?id=Byk-VI9eg
https://openreview.net/pdf?id=Byk-VI9eg
https://openreview.net/forum?id=Byk-VI9eg&noteId=r1D00RZNl
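One plausible reading of the 'soft-discriminator' relaxation this reviewer prefers: weight each discriminator's loss by a softmax with temperature lam, so lam -> infinity recovers the strict maximum and lam = 0 the uniform mean. The paper's exact parameterization may differ; this is only a sketch of the interpolation.

```python
import numpy as np

def soft_discriminator_loss(per_disc_losses, lam=1.0):
    """Softmax-weighted aggregation over per-discriminator generator losses.

    lam = 0    -> uniform average (most forgiving teacher)
    lam -> inf -> max over discriminators (harshest adversary)
    """
    v = np.asarray(per_disc_losses, dtype=float)
    w = np.exp(lam * v - np.max(lam * v))   # numerically stable softmax
    w /= w.sum()
    return float(np.dot(w, v))
```

For example, soft_discriminator_loss([0.5, 2.0, 1.0], lam=5.0) is roughly 1.99, close to the maximum, while lam=0.0 gives the plain mean of about 1.17.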
B1Ob_V4Ne
Byk-VI9eg
ICLR.cc/2017/conference/-/paper293/official/review
{"title": "Interesting ideas, needs more empirical results.", "rating": "7: Good paper, accept", "review": "The paper extends the GAN framework to accommodate multiple discriminators. The authors motivate this from two points of view:\n\n(1) Having multiple discriminators tackle the task is equivalent to optimizing the value function using random restarts, which can potentially help optimization given the nonconvexity of the value function.\n\n(2) Having multiple discriminators can help overcome the optimization problems arising when a discriminator is too harsh a critic. A generator receiving signal from multiple discriminators is less likely to be receiving poor gradient signal from all discriminators.\n\nThe paper's main idea looks straightforward to implement in practice and makes for a good addition to the GAN training toolbelt.\n\nI am not very convinced by the GAM (and by extension the GMAM) evaluation metric. Without evidence that the GAN game is converging (even approximately), it is hard to make the case that the discriminators tell something meaningful about the generators with respect to the data distribution. In particular, it does not inform on mode coverage or probability mass misallocation.\n\nThe learning curves (Figure 3) look more convincing to me: they provide good evidence that increasing the number of discriminators has a stabilizing effect on the learning dynamics. However, it seems like this figure along with Figure 4 also show that the unmodified generator objective is more stable even with only one discriminator. In that case, is it even necessary to have more than one discriminator to train the generator using an unmodified objective?\n\nOverall, I think the ideas presented in this paper show good potential, but I would like to see an extended analysis in the line of Figures 3 and 4 for more datasets before I think it is ready for publication.\n\nUPDATE: The rating has been revised to a 7 following discussion with the authors.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Generative Multi-Adversarial Networks
["Ishan Durugkar", "Ian Gemp", "Sridhar Mahadevan"]
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
["Deep learning", "Unsupervised Learning", "Games"]
https://openreview.net/forum?id=Byk-VI9eg
https://openreview.net/pdf?id=Byk-VI9eg
https://openreview.net/forum?id=Byk-VI9eg&noteId=B1Ob_V4Ne
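The 'too harsh a critic' failure mode in motivation (2) can be checked numerically: with D(G(z)) = sigmoid(logit), the minimax generator loss log(1 - D(G(z))) has a gradient that vanishes as the discriminator grows confident, while the non-saturating -log D(G(z)) does not. This is standard GAN analysis rather than code from the paper:

```python
import numpy as np

def generator_grads_wrt_logit(logit):
    """Gradients of both generator losses w.r.t. the discriminator logit,
    at D(G(z)) = sigmoid(logit)."""
    d = 1.0 / (1.0 + np.exp(-logit))   # discriminator's belief the fake is real
    minimax_grad = -d                  # d/dlogit of  log(1 - d)  is  -d
    nonsat_grad = -(1.0 - d)           # d/dlogit of  -log(d)     is  -(1 - d)
    return d, minimax_grad, nonsat_grad

for logit in (-6.0, 0.0):              # harsh critic vs. uncertain critic
    d, g1, g2 = generator_grads_wrt_logit(logit)
    print(f"D={d:.4f}  minimax grad={g1:.4f}  non-saturating grad={g2:.4f}")
```

At logit = -6 the harsh critic gives D near 0.0025, so the minimax gradient all but vanishes while the non-saturating one stays near -1; with several discriminators of varying strength, the generator is less likely to face this saturation from every critic simultaneously.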