note_id | forum_id | review_title | review_body | review_rating | review_confidence | review_rating_integer | review_confidence_integer |
---|---|---|---|---|---|---|---|
ryhZ3-M4l | HkwoSDPgg | Nice paper, strong accept | This paper addresses the problem of achieving differential privacy in a very general scenario where a set of teachers is trained on disjoint subsets of sensitive data and the student performs prediction based on public data labeled by teachers through noisy voting. I found the approach altogether plausible and very clearly explained by the authors. Adding more discussion of the bound (and its tightness) from Theorem 1 itself would be appreciated. A simple idea of adding perturbation error to the counts, known from the differential privacy literature, is nicely re-used by the authors and elegantly applied in a much broader (non-convex) and practical context than in a number of differentially-private and other related papers. The generality of the approach, clear improvement over predecessors, and clarity of the writing make the method worth publishing. | 9: Top 15% of accepted papers, strong accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 9 | 4 |
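Editor's note: as an illustration of the noisy teacher-vote aggregation this review describes (not code from the paper under review), here is a minimal NumPy sketch: each teacher votes for a class, Laplace noise is added to the per-class counts, and the noisy argmax becomes the label used to train the student. The noise scale `1/gamma`, the number of teachers, and the number of classes are placeholder assumptions.

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, gamma, rng):
    """Return the noisy-max label from an array of per-teacher class votes."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    # Laplace perturbation of the vote counts; larger gamma = less noise.
    noisy_counts = counts + rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(noisy_counts))

# Toy usage: 250 teachers voting over 10 classes for one public, unlabeled input.
rng = np.random.default_rng(0)
votes = rng.integers(0, 10, size=250)
student_label = noisy_aggregate(votes, num_classes=10, gamma=0.05, rng=rng)
```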
HJyf86bNx | HkwoSDPgg | A nice contribution to differentially-private deep learning | Altogether a very good paper, a nice read, and interesting. The work advances the state of the art on differentially-private deep learning, is quite well-written, and relatively thorough.
One caveat is that although the approach is intended to be general, no theoretical guarantees are provided about the learning performance. Privacy-preserving machine learning papers often analyze both the privacy (in the worst case, DP setting) and the learning performance (often under different assumptions). Since the learning performance might depend on the choice of architecture, future experimentation is encouraged, even using the same data sets, with different architectures. If this is not added, then please justify the choice of architecture used, and/or clarify what can be generalized about the observed learning performance.
Another caveat is that the reported epsilons are not those that can be privately released; the authors note that their technique for doing so would change the resulting epsilon. However this would need to be resolved in order to have a meaningful comparison to the epsilon-delta values reported in related work.
Finally, as has been acknowledged in the paper, the present approach may not work on other natural data types. Experiments on other data sets are strongly encouraged. Also, please cite the data sets used.
Other comments:
The discussion of certain parts of the related work is thorough. However, please add some survey/discussion of the related work on differentially-private semi-supervised learning. For example, in the context of random forests, the following paper also proposed differentially-private semi-supervised learning via a teacher-learner approach (although not denoted as “teacher-learner”). The only time the private labeled data is used is when learning the “primary ensemble.” A "secondary ensemble" is then learned only from the unlabeled (non-private) data, with pseudo-labels generated by the primary ensemble.
G. Jagannathan, C. Monteleoni, and K. Pillaipakkamnatt: A Semi-Supervised Learning Approach to Differential Privacy. Proc. 2013 IEEE International Conference on Data Mining Workshops, IEEE Workshop on Privacy Aspects of Data Mining (PADM), 2013.
Section C. does a nice comparison of approaches. Please make sure the quantitative results here constitute an apples-to-apples comparison with the GAN results.
The paper is extremely well-written, for the most part. Some places needing clarification include:
- Last paragraph of 3.1. “all teachers….get the same training data….” This should be rephrased to make it clear that it is not the same w.r.t. all the teachers, but w.r.t. the same teacher on the neighboring database.
- 4.1: The authors state: “The number n of teachers is limited by a trade-off between the classification task’s complexity and the available data.” However, since this tradeoff is not formalized, the statement is imprecise. In particular, if the analysis is done in the i.i.d. setting, the tradeoff would also likely depend on the relation of the target hypothesis to the data distribution.
- Discussion of figure 3 was rather unclear in the text and caption and should be revised for clarity. In the text section, at first the explanation seems to imply that a larger gap is better (as is also indicated in the caption). However later it is stated that the gap stays under 20%. These sentences seem contradictory, which is likely not what was intended. | 9: Top 15% of accepted papers, strong accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 9 | 4 |
HJNWD6Z4l | HkwoSDPgg | Good theory | This paper discusses how to guarantee privacy for training data. In the proposed approach, multiple models trained on disjoint datasets are used as ``teacher'' models, which then train a ``student'' model to predict an output chosen by noisy voting among all of the teachers.
The theoretical results are nice but also intuitive. Since teachers' results are provided via noisy voting, the student model may not duplicate the teacher's behavior. However, the probabilistic bound has quite a number of empirical parameters, which makes it difficult for me to decide whether the security is 100% guaranteed or not.
The experiments on MNIST and SVHN are good. However, since the paper claims the proposed approach may be most useful for sensitive data like medical histories, it would be nice to conduct one or two experiments on such applications. | 7: Good paper, accept | 3: The reviewer is fairly confident that the evaluation is correct | 7 | 3 |
BybRJGfNl | SyOvg6jxx | Solid paper | This paper proposes to use a simple count-based exploration technique in high-dimensional RL applications (e.g., Atari games). The counting is based on a state hash, which implicitly groups (quantizes) similar states together. The hash is computed either via hand-designed features or via features learned without supervision by an auto-encoder. A new state to be explored receives a bonus similar to UCB (to encourage further exploration).
Overall the paper is solid with quite extensive experiments. I wonder how it generalizes to more Atari games. Montezuma’s Revenge may be particularly suitable for approaches that implicitly/explicitly cluster states together (like the proposed one), as it has multiple distinct scenarios, each with small variations in terms of visual appearance, showing clustering structures. On the other hand, such approaches might not work as well if the state space is fully continuous (e.g. in RLLab experiments).
The authors did not answer my question about why the hash code needs to be updated during training. I think it is mainly because the code still needs to be adaptive for a particular game (to achieve lower reconstruction error) in the first few iterations. After that, stabilization is most important. Sec. 2.3 (Learned embedding) is quite confusing (but very important). I hope that the authors could make it more clear (e.g., by writing an algorithm block) in the next version. | 7: Good paper, accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7 | 4 |
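Editor's note: for readers unfamiliar with the hashing scheme discussed in this and the surrounding reviews, the sketch below (an editorial illustration, not the authors' code) shows the core idea: a SimHash code of the state features indexes a visit-count table, and the environment reward is augmented with a bonus that decays with the count. The projection width `k` and bonus coefficient `beta` are assumed hyperparameters.

```python
import numpy as np
from collections import defaultdict

class HashCountBonus:
    """Count-based exploration bonus over SimHash codes of state features."""

    def __init__(self, state_dim, k=32, beta=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((k, state_dim))  # fixed random projection
        self.beta = beta
        self.counts = defaultdict(int)

    def bonus(self, state_features):
        # SimHash: the sign pattern of a random projection, used as a dict key.
        code = tuple((self.A @ state_features > 0).astype(np.int8))
        self.counts[code] += 1
        return self.beta / np.sqrt(self.counts[code])

# Usage inside an RL loop: shaped_reward = env_reward + counter.bonus(phi(obs))
counter = HashCountBonus(state_dim=512)
shaped_bonus = counter.bonus(np.zeros(512))
```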
BJX3nErVg | SyOvg6jxx | Final review: significant results in an important problem, but many moving parts | The paper proposes a new exploration scheme for reinforcement learning using locality-sensitive hashing of states to build a table of visit counts, which are then used to encourage exploration in the style of MBIE-EB of Strehl and Littman.
Several points are appealing about this approach: first, it is quite simple compared to the current alternatives (e.g. VIME, density estimation and pseudo-counts). Second, the paper presents results across several domains, including classic benchmarks, continuous control domains, and Atari 2600 games. In addition, there are results for comparison from several other algorithms (DQN variants), many of which are quite recent. The results indicate that the approach clearly improves over the baseline. The results against other exploration algorithms are not as clear (more dependent on the individual domain/game), but I think this is fine as the appeal of the technique is its simplicity. Third, the paper presents results on the sensitivity to the granularity of the abstraction.
I have only one main complaint, which is that some engineering seems to have been involved to get this to work, and I do not have much confidence in the robustness of the conclusions. I am left uncertain as to how the story changes given slight perturbations over hyper-parameter values or enabling/disabling of certain choices. For example, how critical was using PixelCNN (or tying the weights?) or noisifying the output in the autoencoder, or what happens if you remove the custom additions to BASS? The granularity results show that the choice of resolution is sensitive, and even across games the story is not consistent.
The authors decide to use state-based counts instead of state-action based counts, deviating from the theory, which is odd because the reason to use LSH in the first place is to get closer to what MBIE-EB would advise via tabular counts. There are several explanations as to why state-based versus state-action based counts perform similarly in Atari; the authors do not offer any. Why?
It seems like the technique could be easily used in DQN as well, and many of the variants the authors compare to are DQN-based, so omitting DQN here again seems strange. The authors justify their choice of TRPO by saying it ensures safe policy improvement, though it is not clear that this is still true when adding these exploration bonuses.
The case study on Montezuma's revenge, while interesting, involves using domain knowledge and so does not really fit well with the rest of the paper.
So, in the end, simple and elegant idea to help with exploration tested in many domains, though I am not certain which of the many pieces are critical for the story to hold versus just slightly helpful, which could hurt the long-term impact of the paper.
--- After response:
Thank you for the thorough response, and again my apologies for the late reply.
I appreciate the follow-up version on the robustness of SimHash and state counting vs. state-action counting.
The paper addresses an important problem (exploration), suggesting a "simple" (compared to density estimation) counting method via hashing. It is a nice alternative approach to the one offered by Bellemare et al. If discussion among reviewers were possible, I would now try to assemble an argument to accept the paper. Specifically, I am not as concerned about beating the state of the art in Montezuma's as Reviewer3 is, as the merit of the current paper lies in the simplicity of the hashing and in the wide comparison of domains vs. the baseline TRPO. This paper shows that we should not give up on simple hashing. There still seems to be a bunch of fiddly bits to get this to work, and I am still not confident that these results are easily reproducible. Nonetheless, it is an interesting new contrasting approach to exploration which deserves attention.
Not important for the decision: The argument in the rebuttal concerning DQN & A3C is a bit of a straw man. I did not mention anything at all about A3C; I strictly referred to DQN, which is less sensitive to parameter-tuning than A3C. Also, the main result of Bellemare 2016 on Montezuma used DQN. Hence the omission of these techniques applied to DQN still seems a bit strange (for the Atari experiments). The figure S9 from Mnih et al. points to instances of asynchronous one-step Sarsa with varied thread counts... of course this will be sensitive to parameters: these are both asynchronous online algorithms *and* the parameter varied is the thread count! This is hardly indicative of DQN's sensitivity to parameters, since DQN is (a) single-threaded and (b) uses experience replay, leading to slower policy changes. As another source of stability, DQN uses a target network that changes infrequently. Perhaps the authors made a mistake in the referenced graph in the figure? (I see no Figure 9 in https://arxiv.org/pdf/1602.01783v2.pdf, I assume the authors meant Figure S9)
rkK1pXKNx | SyOvg6jxx | Review | This paper introduces a new way of extending the count based exploration approach to domains where counts are not readily available. The way in which the authors do it is through hash functions. Experiments are conducted on several domains including control and Atari.
It is nice that the authors confirmed the results of Bellemare in that, given the right "density" estimator, count-based exploration can be effective. It is also great to observe that given the right features, we can crack games like Montezuma's Revenge to some extent.
I, however, have several complaints:
First, by using hashing, the authors did not seem to be able to achieve significant improvements over past approaches. Without "feature engineering", the authors achieved only a fraction of the performance achieved in Bellemare et al. on Montezuma's Revenge. In the control domains, the proposed approach also does not outperform VIME. So experimentally, it is very hard to justify the approach.
Second, although hashing could be effective in the domains that the authors tested on, it may not be the best way of estimating densities going forward. As the environments get more complicated, some learning methods are required for understanding the environments, instead of blind hashing. The authors claim that the advantage of the proposed method over Bellemare et al. is that one does not have to design density estimators. But I would argue that density estimators have become so readily available (PixelCNN, VAEs, Real NVP, GANs) that they can be as easily applied as hashing. Training the density estimators is not a difficult problem any more.
| 4: Ok but not good enough - rejection | 3: The reviewer is fairly confident that the evaluation is correct | 4 | 3 |
B15BdW8Vx | Sk8csP5ex | interesting extension of the result of Choromanska et al. but too incremental | This paper shows how spin glass techniques that were introduced in Choromanska et al. to analyze the loss surface of deep neural networks can be applied to deep residual networks. This is an interesting contribution but it seems to me that the results are too similar to the ones in Choromanska et al. and thus the novelty is seriously limited. The main theoretical techniques described in the paper were already introduced and the main theoretical results mentioned there were in fact already proved. The authors also did not get rid of lots of assumptions from Choromanska et al. (path-independence, assumptions about weights distributions, etc.). | 3: Clear rejection | | 3 | -1 |
rkva93GNg | Sk8csP5ex | Interesting theoretical analysis (with new supporting experiments) but presented in a slightly confusing fashion. | Summary:
In this paper, the authors study ResNets through a theoretical formulation of a spin glass model. The conclusions are that ResNets behave as an ensemble of shallow networks at the start of training (shown by examining the magnitude of the weights for paths of a specific length), but this changes over the course of training, during which the scaling parameter C (from assumption A4) increases, causing them to behave as an ensemble of deeper and deeper networks.
Clarity:
This paper was somewhat difficult to follow, being heavy in notation, with perhaps some notation overloading. A summary of some of the proofs in the main text might have been helpful.
Specific Comments:
- In the proof of Lemma 2, I'm not sure where the sequence beta comes from (I don't see how it follows from 11?)
- The ResNet structure used in the paper is somewhat different from normal with multiple layers being skipped? (Can the same analysis be used if only one layer is skipped? It seems like the skipping mostly affects the number of paths there are of a certain length?)
- The new experiments supporting the scale increase in practice are interesting! I'm not sure about Theorems 3, 4 necessarily proving this link theoretically however, particularly given the simplifying assumption at the start of Section 4.2?
| 7: Good paper, accept | 3: The reviewer is fairly confident that the evaluation is correct | 7 | 3 |
ryTj8pINe | Sk8csP5ex | promising insightful results |
This paper extends the spin glass analysis of Choromanska et al. (2015a) to ResNets, which yields novel dynamic ensemble results for ResNets, a connection to Batch Normalization, and an analysis of the loss surface of ResNets.
The paper is well-written, with many insightful explanations of results. Although the technical contributions extend the spin glass model analysis of Choromanska et al. (2015a), the updated version was able to eliminate one of the unrealistic assumptions, and the analysis further provides novel dynamic ensemble results and a connection to Batch Normalization that give more insight into the structure of ResNets.
It is essential to show this dynamic behaviour in a regime without batch normalization in order to untangle the normalization effect from the ensemble feature. Hence the authors claim that a steady increase in the L_2 norm of the weights will maintain this feature, but the setting for Figure 1 is too restrictive to empirically support the claim. At least results on CIFAR-10 without batch normalization showing the effect of the L_2 norm increase, and results that support the claims about Theorem 4, would strengthen the paper.
This work provides an initial rigorous framework for better analyzing the inherent structure of current state-of-the-art ResNet architectures and their variants, which can stimulate potentially more significant results towards a careful understanding of current state-of-the-art models (rather than always attempting to improve the performance of ResNets by applying intuitive incremental heuristics, it is important to make progress on solid understanding too). | 7: Good paper, accept | 3: The reviewer is fairly confident that the evaluation is correct | 7 | 3 |
SJKENmk4l | BJxhLAuxg | | The topic of the paper, model-based RL with a learned model, is important and timely. The paper is well written. I feel that the presented results are too incremental. Augmenting the frame prediction network with another head that predicts the reward is a very sensible thing to do. However, neither the methodology nor the results are novel / surprising, given that the original method of [Oh et al. 2015] already learns to successfully increment score counters in predicted frames in many games.
I’m very much looking forward to seeing the results of applying the learned joint model of frames and rewards to model-based RL as proposed by the authors. | 4: Ok but not good enough - rejection | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4 | 4 |
ryuwhyQ4e | BJxhLAuxg | Final Review | This paper introduces an additional reward-predicting head to an existing NN architecture for video frame prediction. In Atari game playing scenarios, the authors show that this model can successfully predict both reward and next frames.
Pros:
- Paper is well written and easy to follow.
- Model is clear to understand.
Cons:
- The model is incrementally different than the baseline. The authors state that their purpose is to establish a pre-condition, which they achieve. But this makes the paper quite limited in scope.
This paper reads like the start of a really good long paper, or a good short paper. Following through on the future work proposed by the authors would make a great paper. As it stands, the paper is a bit thin on new contributions. | 4: Ok but not good enough - rejection | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4 | 4 |
SkchXXWVe | BJxhLAuxg | Well written paper with a clear focus and interesting future work proposal but with an overall minor contribution. | The paper extends a recently proposed video frame prediction method with reward prediction in order to learn the unknown system dynamics and reward structure of an environment. The method is tested on several Atari games and is able to predict the reward quite well within a range of about 50 steps. The paper is very well written, focussed and is quite clear about its contribution to the literature. The experiments and methods are sound. However, the results are not really surprising given that the system state and the reward are linked deterministically in Atari games. In other words, we can always decode the reward from a network that successfully encodes future system states in its latent representation. The contribution of the paper is therefore minor. The paper would be much stronger if the authors could include experiments on the two future work directions they suggest in the conclusions: augmenting training with artificial samples and adding Monte-Carlo tree search. The suggestions might decrease the number of real-world training samples and increase performance, both of which would be very interesting and impactful. | 4: Ok but not good enough - rejection | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4 | 4 |
rkYg2xjEg | BJmCKBqgl | Why benchmark techniques for IoT on a Xeon? | DyVEDeep presents three approximation techniques for deep vision models aimed at improving inference speed.
The techniques are novel as far as I know.
The paper is clear, the results are plausible.
The evaluation of the proposed techniques does not make a compelling case that someone interested in faster inference would ultimately be well-served by a solution involving the proposed methods.
The authors delineate "static" acceleration techniques (e.g. reduced bit-width, weight pruning) from "dynamic" acceleration techniques which are changes to the inference algorithm itself. The delineation would be fine if the use of each family of techniques were independent of the other, but this is not the case. For example, the use of SPET would, I think, conflict with the use of factored weight matrices (I recall this from http://papers.nips.cc/paper/5025-predicting-parameters-in-deep-learning.pdf, but I suspect there may be more recent work). For this reason, a comparison between SPET and factored weight matrices would strengthen the case that SPET is a relevant innovation. In favor of the factored-matrix approach, there would I think be fewer hyperparameters and the computations would make more-efficient use of blocked linear algebra routines--the case for the superiority of SPET might be difficult to make.
The authors also do not address their choice of the Xeon for benchmarking, when the use cases they identify in the introduction include "low power" and "deeply embedded" applications. In these sorts of applications, a mobile GPU would be used, not a Xeon. A GPU implementation of a convnet works differently than a CPU implementation in ways that might reduce or eliminate the advantage of the acceleration techniques put forward in this paper.
| 6: Marginally above acceptance threshold | 3: The reviewer is fairly confident that the evaluation is correct | 6 | 3 |
BkLHl2ZEe | BJmCKBqgl | Interesting ideas, but I'm not sure about the significance. | This work proposes a number of approximations for speeding up feed-forward network computations at inference time. Unlike much of the previous work in this area which tries to compress a large network, the authors propose algorithms that decide whether to approximate computations for each particular input example.
Speeding up inference is an important problem and this work takes a novel approach. The presentation is exceptionally clear, the diagrams are very beautiful, the ideas are interesting, and the experiments are good. This is a high-quality paper. I especially enjoyed the description of the different methods proposed (SPET, SDSS, SFMA) to exploit patterns in the classifer.
My main concern is that the significance of this work is limited because of the additional complexity and computational costs of using these approximations. In the experiments, the DyVEDeep approach was compared to serial implementations of four large classification models --- inference in these models is order of magnitudes faster on systems that support parallelization. I assume that DyVEDeep has little-to-no performance advantage on a system that allows parallelization, and so anyone looking to speed up their inference on a serial system would want to see a comparison between this approach and the model-compression approaches. Thus, I am not sure how much of an impact this approach can have in it's current state.
Suggestions:
- I wondered what (if any) bounds could be placed on the approximation errors of the proposed methods.
H1nMEJZ4g | BJmCKBqgl | Interesting and clearly written paper. My main concerns about this paper, are about the novelty, and the advantages of the proposed techniques over related papers in the area. | The authors describe a series of techniques which can be used to reduce the total amount of computation that needs to be performed in Deep Neural Networks. The authors propose to selectively identify how important a certain set of computations is to the final DNN output, and to use this information to selectively skip certain computations in the network. As deep learning technologies become increasingly widespread on mobile devices, techniques which enable efficient inference on such devices are becoming increasingly important for practical applications.
The paper is generally well-written and clear to follow. I had two main comments that concern the experimental design, and the relationship to previous work:
1. In the context of deployment on mobile devices, computational costs in terms of both system memory and processing are important considerations. While the proposed techniques do improve computational costs, they don’t reduce model size in terms of total number of parameters. Also, the gains obtained using the proposed method appear to be similar to other works that do allow for improvements in terms of both memory and computation (see, e.g., (Han et al., 2015)). It would have been interesting if the authors had reported results when the proposed techniques were applied to models that have been compressed in size as well.
S. Han, H. Mao, and W. J. Dally. "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding." arXiv preprint arXiv:1510.00149 (2015).
2. The SDSS technique in the paper appears to be very similar to the “Perforated CNN” technique proposed by Figurnov et al. (2015). In that work, as in the authors' work, CNN activations are approximated by interpolating responses from neighbors. The authors should comment on the similarities and differences between the proposed method and the referenced work.
Figurnov, Michael, Dmitry Vetrov, and Pushmeet Kohli. "Perforatedcnns: Acceleration through elimination of redundant convolutions." arXiv preprint arXiv:1504.08362 (2015).
Other minor comments appear below:
3. A clarification question: In comparing the proposed methods to the baseline, in Section 4, the authors mention that they used their own custom implementation. However, do the baselines use the same custom implementation, or do they use the optimized BLAS libraries?
4. The authors should also consider citing the following additional references:
* S. Tan and K. C. Sim, "Towards implicit complexity control using variable-depth deep neural networks for automatic speech recognition," 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, 2016, pp. 5965-5969.
* Graves, Alex. "Adaptive Computation Time for Recurrent Neural Networks." arXiv preprint arXiv:1603.08983 (2016).
5. Please explain what the Y-axis in Figure 7 represents in the text.
6. Typographical Error: Last paragraph of Section 2: “... are qualitatively different the aforementioned ...” → “... are qualitatively different from the aforementioned ...” | 6: Marginally above acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6 | 4 |
BkcY-CZNl | BJbD_Pqlg | Updated Review | The paper reports several connections between the image representations in state-of-the-art object recognition networks and findings from human visual psychophysics:
1) It shows that the mean L1 distance in the feature space of certain CNN layers is predictive of human noise-detection thresholds in natural images.
2) It reports that for 3 different 2-AFC tasks for which there exists a condition that is hard and one that is easy for humans, the mutual information between decision label and quantised CNN activations is usually higher in the condition that is easier for humans.
3) It reproduces the general bandpass nature of contrast/frequency detection sensitivity in humans.
While these findings appear interesting, they are also rather anecdotal and some of them seem to be rather trivial (e.g. findings in 2). To make a convincing statement it would be important to explore what aspects of the CNN lead to the reported findings. One possible way of doing that could be to include good baseline models to compare against. As I mentioned before, one such baseline should be a reasonable low-level vision model. Another interesting direction would be to compare the results for the same network at different training stages.
In that way one might be able to find out which parts of the reported results can be reproduced by simple low-level image processing systems, which parts are due to the general deep network’s architecture and which parts arise from the powerful computational properties (object recognition performance) of the CNNs.
In conclusion, I believe that establishing correspondences between state-of-the art CNNs and human vision is a potentially fruitful approach. However to make a convincing point that found correspondences are non-trivial, it is crucial to show that non-trivial aspects of the CNN lead to the reported findings, which was not done. Therefore, the contribution of the paper is limited since I cannot judge whether the findings really tell me something about a unique relation between high-performing CNNs and the human visual system.
UPDATE:
Thank you very much for your extensive revision and inclusion of several of the suggested baselines.
The results of the baseline models often raise more questions and make the interpretation of the results more complex, but I feel that this reflects the complexity of the topic and makes the work rather more worthwhile.
One further suggestion: As the experiments with the snapshots of the CaffeNet shows, the direct relationship between CNN performance and prediction accuracy of biological vision known from Yamins et al. 2014 and Cadieu et al. 2014 does not necessarily hold in your experiments. I think this should be discussed somewhere in the paper.
All in all, I think that the paper now constitutes a decent contribution relating state-of-the art CNNs to human psychophysics and I would be happy for this work to be accepted.
I raise my rating for this paper to 7. | 7: Good paper, accept | 3: The reviewer is fairly confident that the evaluation is correct | 7 | 3 |
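Editor's note: to make the "mean L1 distance in CNN feature space" correlate discussed in this review concrete, here is a small illustrative sketch (an editorial addition, not the paper's code). It measures the mean absolute change in one layer's activations between a clean image and a noisy version; the use of `vgg16` from torchvision and the particular layer index are assumptions, not the networks or layers analyzed in the paper.

```python
import torch
import torchvision.models as models

def mean_l1_feature_distance(img, noisy_img, layer_idx=10):
    """Mean |f(x) - f(x_noisy)| over the activations of one conv layer.

    img, noisy_img: tensors of shape (1, 3, H, W), already normalized.
    """
    # Truncate the pretrained feature stack at the chosen layer.
    features = models.vgg16(weights="IMAGENET1K_V1").features[: layer_idx + 1].eval()
    with torch.no_grad():
        diff = features(noisy_img) - features(img)
    return diff.abs().mean().item()

# Toy usage: a random image perturbed by additive Gaussian noise of a fixed level.
x = torch.rand(1, 3, 224, 224)
score = mean_l1_feature_distance(x, x + 0.05 * torch.randn_like(x))
```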
H19W6GPVl | BJbD_Pqlg | Review of "Human Perception in Computer Vision" | The author works to compare DNNs to human visual perception, both quantitatively and qualitatively.
Their first result involves performing a psychophysical experiment both on humans and on a model and then comparing the results (actually I think the psychophysical data was collected in a different work, and is just used here). The specific psychophysical experiment determined, separately for each of a set of approx. 1110 images, what the noise level of additive noise would have to be to make a just-noticeable difference for humans in discriminating the noiseless image from the noisy one. The authors then define a metric on neural networks that allows them to measure what they posit might be a similar property for the networks. They then correlate the pattern of noise levels between the neural networks and the humans. Deep neural networks end up being much better predictors of the human pattern of noise levels than simpler measures of image perturbation (e.g. RMS contrast).
A second result involves comparing DNNs to humans in terms of their pattern of errors in a series of highly controlled experiments using stimuli that illustrate classic properties of human visual processing -- including segmentation, crowding and shape understanding. They then used an information-theoretic single-neuron metric of discriminability to assess similar patterns of errors for the DNNs. Again, top layers of DNNs were able to reproduce the human patterns of difficulty across stimuli, at least to some extent.
A third result involves comparing DNNs to humans in terms of their pattern of contrast sensitivity across a series of sine-grating images at different frequencies. (There is a classic result from vision research as to what this pattern should be, so it makes a natural target for comparison to models.) The authors define a DNN correlate for the property in terms of the cross-neuron average of the L1-distance between responses to a blank image and responses to a sinusoid of each contrast and frequency. They then qualitatively compare the results of this metric for DNN models to known results from the literature on humans, finding that, like humans, there is an apparent bandpass response for low-contrast gratings and a mostly constant response at high contrast.
Pros:
* The general concept of comparing deep nets to psychophysical results in a detailed, quantitative way, is really nice.
* They nicely defined a set of "linking functions", e.g. metrics that express how a specific behavioral result is to be generated from the neural network. (Ie. the L1 metrics in results 1 and 3 and the information-theoretic measure in result 2.) The framework for setting up such linking functions seems like a great direction to me.
* The actual psychophysical data seems to have been handled in a very careful and thoughtful way. These folks clearly know what they're doing on the psychophysical end.
Cons:
* To my mind, the biggest problem with this paper is that it doesn't say something that we didn't already know. Existing results have shown that DNNs are pretty good models of the human visual system in a whole bunch of ways, and this paper adds some more ways. What would have been great would be:
(a) showing that the metric of comparison to humans was sufficiently sensitive that it could pull apart various DNN models, making one clearly better than the others.
(b) identifying a wide gap between the DNNs and the humans that is still unfilled. They sort of do this, since while the DNNs are good at reproducing the human judgements in Result 1, they are not perfect -- the gap is between 60% explained variance and 84% inter-human consistency. This 24% gap is potentially important, so I'd really like to have seen them explore that gap more -- e.g. (i) widening the gap by identifying which images caused the gap most and focusing a test on those, or (ii) closing the gap by training a neural network to get the pattern 100% correct and seeing if that made better CNNs as measured on other metrics/tasks.
In other words, I would definitely have traded off not having results 2 and 3 for a deeper exploration of result 1. I think their overall approach could be very fruitful, but it hasn't really been carried far enough here.
* I found a few things confusing about the layout of the paper. I especially found that the quantitative results for results 2 and 3 were not clearly displayed. Why was figure 8 relegated to the appendix? Where are the quantifications of model-human similarities for the data shown in Figure 8? Isn't this the whole meat of their second result? This should really be presented in a more clear way.
* Where is the quantification of model-human similarity for the data shown in Figure 3? Isn't there a way to get the human contrast-sensitivity curve and then compare it to that of models in a more quantitatively precise way, rather than just noting a qualitative agreement? It seems odd to me that this wasn't done.
| 6: Marginally above acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6 | 4 |
ByL97qNEg | BJbD_Pqlg | Review of "HUMAN PERCEPTION IN COMPUTER VISION" | This paper compares the performance, in terms of sensitivity to perturbations, of multilayer neural networks to human vision. In many of the tasks tested, multilayer neural networks exhibit similar sensitivities as human vision.
From the tasks used in this paper one may conclude that multilayer neural networks capture many properties of the human visual system. But of course there are well-known adversarial examples in which small, perceptually invisible perturbations cause catastrophic errors in categorization, so against that backdrop it is difficult to know what to make of these results. That the two systems exhibit similar phenomenologies in some cases could mean any number of things, and so it would have been nice to see a more in-depth analysis of why this is happening in some cases and not others. For example, for the noise perturbations described in the first section, one sees already that conv2 is correlated with human sensitivity. So why not examine how the first-layer filters are being combined to produce this contextual effect? From that we might actually learn something about neural mechanisms.
Although I like and am sympathetic to the direction the author is taking here, I feel it just scratches the surface in terms of analyzing perceptual correlates in multilayer neural nets.
| 6: Marginally above acceptance threshold | 3: The reviewer is fairly confident that the evaluation is correct | 6 | 3 |
HkMx83V4l | HJ0NvFzxl | Complex implementation of a differentiable memory as a graph with promising preliminary results. | This paper proposes learning on the fly to represent a dialog as a graph (which acts as the memory), and is first demonstrated on the bAbI tasks. Graph learning is part of the inference process, though there is long-term representation learning to learn graph transformation parameters and the encoding of sentences as input to the graph. This seems to be the first implementation of a differentiable memory as a graph: it is much more complex than previous approaches like memory networks, without significant gain in performance on the bAbI tasks, but it is still very preliminary work, and the representation of memory as a graph seems much more powerful than a stack. Clarity is a major issue, but from an initial version that was constructive and better read by a computer than a human, the author proposed a hugely improved later version. This original, technically accurate (within what I understood) and thought-provoking paper is worth publishing.
The preliminary results do not tell us yet if the highly complex graph-based differentiable memory has more learning or generalization capacity than other approaches. The performance on the bAbI task is comparable to the best memory networks, but still worse than more traditional rule induction (see http://www.public.asu.edu/~cbaral/papers/aaai2016-sub.pdf). This is still clearly promising.
The sequence of transformations in Algorithm 1 looks sensible, though the authors do not discuss any other operation ordering. In particular, it is not clear to me that you need the node state update step T_h if you have the direct reference update step T_h,direct.
It is striking that the only trick that is essential for proper performance is the ‘direct reference’, which actually has nothing to do with the graph building process, but is rather an attention mechanism for the graph input: attention is focused on words that are relevant to the node type rather than the whole sentence. So the question “how useful are all these graph operations” remains. A much simpler version of a similar trick may have been proposed in the context of memory networks, also for ICLR'17 (see match type in "LEARNING END-TO-END GOAL-ORIENTED DIALOG" by Bordes et al.).
The authors also mention the time and size needed to train the model: is the issue arising for learning, inference or both? A description of the actual implementation would help (no pointer to open source code is provided). The author mentions Theano in one of my questions: how are the transformations compiled in advance as units? How is the gradient back-propagated through the graph if this one is only described at runtime?
Typo: in the appendices B.2 and B.2.1, the right side of the equation that applies the update gate has h’_nu while it should be h_nu.
In the references, the author could mention the pioneering work of Lee Giles on representing graphs with RNNs.
Revision: I have improved my rating for the following reasons:
- Pointers to a highly readable and well-structured Theano source are provided.
- The delta improvement of the paper has been impressive over the review process, and I am confident this will be an impactful paper.
- Much simpler alternative approaches such as Memory Networks seem to be plateauing for problems such as dialog modeling; we need alternatives.
- The architecture in this work is still too complex, but this is often the case as we start with DNNs, and then we find simplifications that actually improve performance.
| 9: Top 15% of accepted papers, strong accept | 3: The reviewer is fairly confident that the evaluation is correct | 9 | 3 |
Hk_mPh-4e | HJ0NvFzxl | | The paper proposes an extension of the Gated Graph Sequence Neural Network by including in this model the ability to produce complex graph transformations. The underlying idea is to propose a method that will be able to build/modify a graph structure as an internal representation for solving a problem, and particularly for solving question-answering problems in this paper. The author proposes 5 different possible differentiable transformations that will be learned on a training set, typically in a supervised fashion where the state of the graph is given at each timestep. A particular occurrence of the model is presented that takes a sequence as input and iteratively updates an internal graph state to produce a final prediction, and which can be applied for solving QA tasks (e.g. bAbI) with interesting results.
The approach in this paper is really interesting since the proposed model is able to maintain a representation of its current state as a complex graph, while still keeping the property of being differentiable and thus easily learnable through gradient-descent techniques. It can be seen as a successful attempt to mix continuous and symbolic representations. It moreover seems more general than the recent attempts made to add some 'symbolic' elements to differentiable models (Memory Networks, NTM, etc.) since the shape of the state is not fixed here and can evolve. My main concern is about the way the model is trained, i.e. by providing the state of the graph at each timestep, which can be done for particular tasks (e.g. bAbI) only, and cannot be the solution for more complex problems. My other concern is about the whole content of the paper, which would perhaps best fit a journal format and not a conference format, making the article still difficult to read due to its density. | 9: Top 15% of accepted papers, strong accept | 3: The reviewer is fairly confident that the evaluation is correct | 9 | 3 |
SkibszLEx | HJ0NvFzxl | Architecture which allows learning graph->graph tasks, improves state of the art on bAbI | The main contribution of this paper seems to be the introduction of a set of differentiable graph transformations which allow you to learn graph->graph classification tasks using gradient descent. This maps naturally to the task of learning a cellular automaton represented as a sequence of graphs. In that task, the graph of nodes grows at each iteration, with nodes pointing to neighbors and special nodes 0/1 representing the values. The proposed architecture allows one to learn this sequence of graphs, although in the experiment, this task (Rule 30) was far from solved.
This idea is combined with ideas from previous papers (GGS-NN) to allow the model to produce textual output rather than graph output, and to use graphs as an intermediate representation, which allows it to beat the state of the art on bAbI tasks. | 7: Good paper, accept | | 7 | -1 |
Hkes73e4g | S1Bb3D5gg | Review | This paper presents a new, public dataset and tasks for goal-oriented dialogue applications. The dataset and tasks are constructed artificially using rule-based programs, in such a way that different aspects of dialogue system performance can be evaluated ranging from issuing API calls to displaying options, as well as full-fledged dialogue.
This is a welcome contribution to the dialogue literature, which will help facilitate future research into developing and understanding dialogue systems. Still, there are pitfalls in taking this approach. First, it is not clear how suitable Deep Learning models are for these tasks compared to traditional methods (rule-based systems or shallow models), since Deep Learning models are known to require many training examples, and therefore performance differences between neural networks may simply boil down to regularization techniques. Tasks 1-5 are also completely deterministic, which means evaluating performance on these tasks won't measure the ability of the models to handle noisy and ambiguous interactions (e.g. inferring a distribution over user goals, or executing dialogue repair strategies), which is a very important aspect in dialogue applications. Overall, I still believe this is an interesting direction to explore.
As discussed in the comments below, the paper does not have any baseline model with word order information. I think this is a strong weakness of the paper, because it makes the neural networks appear unreasonably strong, yet simpler baselines could very likely be competitive with (or better than) the proposed neural networks. To maintain a fair evaluation and correctly assess the power of representation learning for this task, I think it's important that the authors experiment with one additional non-neural-network benchmark model which takes into account word order information. This would more convincingly demonstrate the utility of Deep Learning models for this task. For example, one could experiment with a logistic regression model which takes as input 1) word embeddings (similar to the Supervised Embeddings model), 2) bi-gram features, and 3) match-type features. If such a baseline is included, I will increase my rating to 8.
Final minor comment: in the conclusion, the paper states "the existing work has no well defined measures of performances". This is not really true. End-to-end trainable models for task-oriented dialogue have well-defined performance measures. See, for example, "A Network-based End-to-End Trainable Task-oriented Dialogue System" by Wen et al. On the other hand, non-goal-oriented dialogues are generally harder to evaluate, but given human subjects these can also be evaluated. In fact, this is what Liu et al. (2016) do for Twitter. See also "Strategy and Policy Learning for Non-Task-Oriented Conversational Systems" by Yu et al.
----
I've updated my score following the new results added in the paper. | 8: Top 50% of accepted papers, clear accept | | 8 | -1 |
Bk118K4Ne | S1Bb3D5gg | Thought-provoking paper, more on the metrics than the algorithms. | Attempts to use chatbots for every form of human-computer interaction have been a major trend in 2016, with claims that they could solve many forms of dialogs beyond simple chit-chat. This paper represents a serious reality check. While it is mostly relevant for Dialog/Natural Language venues (to educate software engineers about the limitations of current chatbots), it can also be published at Machine Learning venues (to educate researchers about the need for more realistic validation of ML applied to dialogs), so I would consider this work of high significance.
Two important conjectures are underlying this paper and likely to open to more research. While they are not in writing, Antoine Bordes clearly stated them during a NIPS workshop presentation that covered this work. Considering the metrics chosen in this paper:
1) The performance of end2end ML approaches is still insufficient for goal oriented dialogs.
2) When comparing algorithms, relative performance on synthetic data is a good predictor of performance on natural data. This would be quite a departure from previous observations, but the authors made a strong effort to match the synthetic and natural conditions.
While its original algorithmic contribution consists in one rather simple addition to memory networks (match type), it is the first time these are deployed and tested on a goal-oriented dialog, and the experimental protocol is excellent. The paper is overall very clear and accessible to a readership beyond ML and dialog researchers. I was in particular impressed by how the short appendix on memory networks summarized them so well, followed by the tables that explained the influence of the number of hops.
While this paper represents the state-of-the-art in the exploration of more rigorous metrics for dialog modeling, it also reminds us how brittle and somewhat arbitrary these remain. Note this is more a recommendation for future research than for revision.
First they use the per-response accuracy (basically the next utterance classification among a fixed list of responses). Looking at table 3 clearly shows how absurd this can be in practice: all that matters is a correct API call and a reasonably short dialog, though this would only give us a 1/7 accuracy, as the 6 bot responses needed to reach the API call also have to be exact.
Would the per-dialog accuracy, where all responses must be correct, be better? Table 2 shows how sensitive it is to the experimental protocol. I was initially puzzled that the accuracy for subtask T3 (0.0) was much lower than the accuracy for the full dialog T5 (19.7), until the authors pointed me to the task definitions (3.1.1), where T3 requires displaying 3 options while T5 only requires displaying one.
For the concierge data, what would happen if ‘correct’ meant being the best, not among the 5-best?
While I cannot fault the authors for using standard dialog metrics, and coming up with new ones that are actually too pessimistic, I can think of one way to represent dialogs that could result in more meaningful metrics for goal-oriented dialogs. Suppose I sell Virtual Assistants as a service, being paid upon successful completion of a dialog. What is the metric that would maximize my revenue? In this restaurant problem, the loss would probably be some weighted sum of the number of errors in the API call, the number of turns to reach that API call, and the number of options rejected by the user. However, such a loss cannot be measured on canned dialogs and would require either a real human user or a realistic simulator.
Another issue closely related to representation learning that this paper fails to address or explain properly is what happens if the vocabulary used by the user does not match exactly the vocabulary in the knowledge base. In particular, for the match type algorithm to code ‘Indian’ as ‘type of cuisine’, this word would have to occur exactly in the KB. I can imagine situations where the KB uses some obfuscated terminology, and we would like ML to learn the associations rather than humans to hand-describe them.
| 8: Top 50% of accepted papers, clear accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 8 | 4 |
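Editor's note: the "match type" trick referenced in the reviews above can be summarized in a few lines. The sketch below is an editorial paraphrase under the assumption that the knowledge base is available as a mapping from entity type to the surface forms of that type; it flags, per KB type, whether a candidate response mentions an entity of that type that also appeared in the dialog so far.

```python
def match_type_features(history_words, candidate_words, kb):
    """One binary feature per KB entity type (e.g. cuisine, price range).

    kb: dict mapping a type name to the set of entity words of that type.
    """
    history, candidate = set(history_words), set(candidate_words)
    return {
        f"match_{etype}": int(bool(entities & history & candidate))
        for etype, entities in kb.items()
    }

# Toy usage with a hypothetical two-type knowledge base.
kb = {"cuisine": {"indian", "italian"}, "price": {"cheap", "expensive"}}
feats = match_type_features(
    ["i", "want", "cheap", "indian", "food"],
    ["api_call", "indian", "cheap"],
    kb,
)  # -> {"match_cuisine": 1, "match_price": 1}
```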
rky-ix7Ee | S1Bb3D5gg | Review | SYNOPSIS:
This paper introduces a new dataset for evaluating end-to-end goal-oriented dialog systems. All data is generated in the restaurant setting, where the goal is to find availability and eventually book a table based on parameters provided by the user to the bot as part of a dialog. Data is generated by running a simulation using an underlying knowledge base to generate samples for the different parameters (cuisine, price range, etc), and then applying rule-based transformations to render natural language descriptions. The objective is to rank a set of candidate responses for each next turn of the dialog, and evaluation is reported in terms of per-response accuracy and per-dialog accuracy. The authors show that Memory Networks are able to improve over basic bag-of-words baselines.
THOUGHTS:
I want to thank the authors for an interesting contribution. Having said that, I am skeptical about the utility of end-to-end trained systems in the narrow-domain setting. In the open-domain setting, there is a strong argument to be made that hand-coding all states and responses would not scale, and hence end-to-end trained methods make a lot of sense. However, in the narrow-domain setting, we usually know and understand the domain quite well, and the goal is to obtain high user satisfaction. Doesn't it then make sense in these cases to use the domain knowledge to engineer the best system possible?
Given that the domain is already restricted, I'm also a bit disappointed that the goal is to RANK instead of GENERATE responses, although I understand that this makes evaluation much easier. I'm also unsure how these candidate responses would actually be obtained in practice? It seems that the models rank the set of all responses in train/val/test (last sentence before Sec 3.2). Since a key argument for the end-to-end training approach is ease of scaling to new domains without having to manually re-engineer the system, where is this information obtained for a new domain in practice? Generating responses would allow much better generalization to new domains, as opposed to simply ranking some list of hand-collected generic responses, and in my mind this is the weakest part of this work.
Finally, as data is generated using a simulation by expanding (cuisine, price, ...) tuples using NL-generation rules, it necessarily constrains the variability in the training responses. Of course, this is traded off with the ability to generate unlimited data using the simulator. But I was unable to see the list of rules that was used. It would be good to publish this as well.
Overall, despite my skepticism, I think it is an interesting contribution worthy of publication at the conference.
------
I've updated my score following the clarifications and new results. | 7: Good paper, accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7 | 4 |
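Editor's note: since several of these reviews discuss ranking a fixed list of candidate responses, here is a tiny sketch of the general supervised-embeddings-style scoring they refer to (an editorial illustration with untrained, randomly initialized matrices, not the paper's model): context and candidate are embedded as bags of words through two matrices and scored by a dot product.

```python
import numpy as np

def bow(tokens, vocab):
    """Bag-of-words vector over a fixed vocabulary (unknown words are ignored)."""
    v = np.zeros(len(vocab))
    for t in tokens:
        if t in vocab:
            v[vocab[t]] += 1.0
    return v

def rank_candidates(context_tokens, candidates, vocab, A, B):
    """Score each candidate response by a dot product in the embedding space."""
    c = A @ bow(context_tokens, vocab)
    scores = [float(c @ (B @ bow(cand, vocab))) for cand in candidates]
    return int(np.argmax(scores)), scores

# Toy usage with random (untrained) embedding matrices of dimension d=32.
vocab = {w: i for i, w in enumerate(["hello", "book", "table", "indian", "cheap"])}
rng = np.random.default_rng(0)
A = rng.standard_normal((32, len(vocab)))
B = rng.standard_normal((32, len(vocab)))
best, _ = rank_candidates(["book", "indian", "table"],
                          [["hello"], ["book", "table", "cheap"]], vocab, A, B)
```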
r1w-zAZ4e | r10FA8Kxg | Experimental comparison of shallow, deep, and (non)-convolutional architectures with a fixed parameter budget | This paper aims to investigate the question of whether shallow non-convolutional networks can be as effective as deep convolutional ones for image classification, given that both architectures use the same number of parameters.
To this end the authors conducted a series of experiments on the CIFAR10 dataset.
They find that there is a significant performance gap between the two approaches, in favour of deep CNNs.
The experiments are well designed and involve a distillation training approach, and the results are presented in a comprehensive manner.
They also observe (as others have before) that student models can be shallower than the teacher model from which they are trained for comparable performance.
My take on these results is that they suggest that using (deep) conv nets is more effective, since this model class encodes a form of a-priori or domain knowledge that images exhibit a certain degree of translation invariance in the way they should be processed for high-level recognition tasks. The results are therefore perhaps not quite surprising, but not completely obvious either.
An interesting point on which the authors comment only very briefly is that among the non-convolutional architectures the ones using 2 or 3 hidden layers outperform those with 1, 4 or 5 hidden layers. Do you have an interpretation / hypothesis of why this is the case? It would be interesting to discuss the point a bit more in the paper.
It was not quite clear to me why the experiments were limited to use at most 30M parameters. None of the experiments in Figure 1 seem to be saturated. Although the performance gap between CNN and MLP is large, I think it would be worthwhile to push the experiment further for the final version of the paper.
The authors state in the last paragraph that they expect shallow nets to be relatively worse in an ImageNet classification experiment.
Could the authors argue why they think this to be the case?
One could argue that the much larger training dataset size could compensate for shallow and/or non-convolutional choices of the architecture.
Since MLPs are universal function approximators, one could understand architecture choices as expressions of certain priors over the function space, and in a large-data regimes such priors could be expected to be of lesser importance.
This issue could for example be examined on ImageNet when varying the amount of training data.
Also, the much higher resolution of ImageNet images might have a non-trivial impact on the CNN-MLP comparison as compared to the results established on the CIFAR10 dataset.
Experiments on a second data set would also help to corroborate the findings, demonstrating to what extent such findings are variable across datasets.
| 7: Good paper, accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7 | 4 |
BkaSqlzEe | r10FA8Kxg | Experimental paper with interesting results. Well written. Solid experiments. | Description.
This paper describes experiments testing whether deep convolutional networks can be replaced with shallow networks with the same number of parameters without loss of accuracy. The experiments are performed on the CIFAR-10 dataset, where deep convolutional teacher networks are used to train shallow student networks using L2 regression on logit outputs. The results show that similar accuracy on the same parameter budget can only be obtained when multiple layers of convolution are used.
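For concreteness, my reading of the "L2 regression on logit outputs" objective described above is the following (the symbols g for the student logits, z for the teacher logits, and the plain mean-squared form are my own shorthand, not taken from the paper):

```latex
\min_{\theta} \; \frac{1}{N} \sum_{i=1}^{N} \big\| g(x_i; \theta) - z(x_i) \big\|_2^2
```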
Strong points.
- The experiments are carefully done with thorough selection of hyperparameters.
- The paper shows interesting results that go partially against conclusions from the previous work in this area (Ba and Caruana 2014).
- The paper is well and clearly written.
Weak points:
- CIFAR is still a somewhat toy-like dataset with only 10 classes. It would be interesting to see some results on a more challenging problem such as ImageNet. Would the results for a large number of classes be similar?
Originality:
- This is mainly an experimental paper, but the question it asks is interesting and worth investigation. The experimental results are solid and provide new insights.
Quality:
- The experiments are well done.
Clarity:
- The paper is well written and clear.
Significance:
- The results go against some of the conclusions from previous work, so should be published and discussed.
Overall:
Experimental paper with interesting results. Well written. Solid experiments.
| 7: Good paper, accept | 3: The reviewer is fairly confident that the evaluation is correct | 7 | 3 |
BkxN0nr4l | Hk85q85ee | Optimization of a ReLU network under new assumptions | This work analyzes the continuous-time dynamics of gradient descent when training two-layer ReLU networks (one input, one output, thus only one layer of ReLU units). The work is interesting in the sense that it does not involve some unrealistic assumptions used by previous works with a similar goal. Most importantly, this work does not assume independence between input and activations, and it does not rely on noise injection (which can simplify the analysis). Nonetheless, removing these simplifying assumptions comes at the expense of limiting the analysis to:
1. Only one layer of nonlinear units
2. Discarding the bias term in the ReLU while keeping the input Gaussian (thus the constant-input trick cannot be used to simulate the bias term).
3. Imposing a strong assumption on the representation of the input/output relationship via (bias-less) ReLU networks: the existence of orthonormal bases to represent this relationship.
Having said that, as far as I can tell, the paper presents an original analysis in this new setting, which is interesting and valuable. For example, by exploiting the symmetry in the problem under assumption 3 listed above, the authors are able to reduce the high-dimensional dynamics of gradient descent to a bivariate dynamics (instead of dealing with the original parameter dimensionality). Such a reduction to 2D allows the authors to rigorously analyze the behavior of the dynamics (e.g. convergence to a saddle point in the symmetric case, or to the optimum in the non-symmetric case).
Clarification Needed: first paragraph of page 2. Near the end of the paragraph you say "Initialization can be arbitrarily close to origin", but at the beginning of the same paragraph you state "initialized randomly with standard deviation of order 1/sqrt(d)". Aren't these inconsistent?
Some minor comments about the draft:
1. In section 1, 2nd paragraph: "We assume x is Gaussian and thus the network is bias free". Do you mean "zero-mean" Gaussian then?
2. "standard deviation" is spelled "standard derivation" multiple times in the paper.
3. Page 6, last paragraph, first line: Corollary 4.1 should be Corollary 4.2
| 8: Top 50% of accepted papers, clear accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 8 | 4 |
SJVUCuuNg | Hk85q85ee | Potentially new analysis, but hard to read | The paper proposes a convergence analysis of some two-layer NNs with ReLUs. It is not the first such analysis, but it may be novel in the assumptions used and in its focus on the ReLU nonlinearity, which is quite popular in practice.
The paper is quite hard to read, with many English mistakes and typos. Nevertheless, the analysis seems to be generally correct. The novelty and the key insights are, however, not always well motivated or presented. And the argument that the work uses realistic assumptions (Gaussian inputs, for example) as opposed to other works is actually quite debatable.
Overall, the paper looks like a correct analysis work, but its form is really suboptimal in terms of writing/presentation, and the novelty and relevance of the results are not always very clear, unfortunately. The main results and intuition should be more clearly presented, and details could be moved to appendices for example - that could only help to improve the visibility and impact of these interesting results. | 4: Ok but not good enough - rejection | 3: The reviewer is fairly confident that the evaluation is correct | 4 | 3 |
HkAvHKxNl | Hk85q85ee | Hard to read paper; unclear conclusions. | In this paper, the author analyzes the convergence dynamics of a single layer non-linear network under Gaussian iid input assumptions. The first half of the paper, dealing with a single hidden node, was somewhat clear, although I have some specific questions below. The second half, dealing with multiple hidden nodes, was very difficult for me to understand, and the final "punchline" is quite unclear. I think the author should focus on intuition and hide detailed derivations and symbols in an appendix.
In terms of significance, it is very hard for me to be sure how generalizable these results are: the Gaussian assumption is a very strong one, and so is the assumption of iid inputs. Real-world feature inputs are highly correlated and are probably not Gaussian. Such assumptions are not made (as far as I can tell) in recent papers analyzing the convergence of deep networks, e.g. Kawaguchi, NIPS 2016. Although the author says that no assumption is made on the independence of activations, this assumption is shifted to the input instead. I think this means that the activations are combinations of iid random variables, and are probably Gaussian-like, right? So I'm not sure where this leaves us.
Specific comments:
1. Please use D_w instead of D to show that D is a function of w, and not a constant. This gets particularly confusing when switching to D(w) and D(e) in Section 3. In general, notation in the paper is hard to follow and should be clearly introduced.
2. Section 3, regarding the statement "when the neuron is cut off at sample l, then (D^(t))_u": what is the relationship between l and u? Also, this is another example of notational inconsistency that causes problems for the reader.
3. Section 3.1, what is F(e, w) and why is D(e) introduced? This was unclear to me.
4. Theorem 3.3 suggests that (if \epsilon > 0), to have the maximal probability of convergence, \epsilon should be very close to 0, which means that the ball B_r has radius r -> 0? This seems to contradict Figure 2.
5. Section 4 was really unclear and I still do not understand what the symmetry group really represents. Is there an intuitive explanation why this is important?
6. Figure 5: what is a_j ?
I encourage the author to rewrite this paper for clarity. In its present form, it would be very difficult to understand the takeaways from the paper.
rkCS99SVl | Skvgqgqxe | official review | The paper proposes to use reinforcement learning to learn how to compose the words in a sentence, i.e. parse tree, that can be helpful for the downstream tasks. To do that, the shift-reduce framework is employed and RL is used to learn the policy of the two actions SHIFT and REDUCE. The experiments on four datasets (SST, SICK, IMDB, and SNLI) show that the proposed approach outperformed the approach using predefined tree structures (e.g. left-to-right, right-to-left).
The paper is well written and has two good points. Firstly, the idea of using RL to learn parse trees guided by downstream tasks is very interesting and novel, and employing the shift-reduce framework is a very smart choice because the set of actions is minimal (shift and reduce). Secondly, what is shown in the paper somewhat confirms the need for parse trees. This is indeed interesting because of the current debate on whether syntax is helpful.
I have the following comments:
- it seems that the authors weren't aware of some recent work using RL to learn structures for composition, e.g. Andreas et al (2016).
- because different composition functions (e.g. LSTM, GRU, or a classical recursive neural net) have different inductive biases, I was wondering if the tree structures found by the model would be independent of the choice of composition function.
- because RNNs in theory are equivalent to Turing machines, I was wondering if restricting the expressiveness of the model (e.g. reducing the dimension) can help the model focus on discovering more helpful tree structures.
Ref:
Andreas et al. Learning to Compose Neural Networks for Question Answering. NAACL 2016 | 8: Top 50% of accepted papers, clear accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 8 | 4 |
r19SqUiNe | Skvgqgqxe | Accept | I have not much to add to my pre-review comments.
It's a very well written paper with an interesting idea.
Lots of people currently want to combine RL with NLP. It is very en vogue.
Nobody has gotten that to work yet in any really groundbreaking or influential way that results in actually superior performance on any highly relevant or competitive NLP task.
Most people struggle with the fact that NLP requires very efficient methods on very large datasets and RL is super slow.
Hence, I believe this direction hasn't shown much promise yet and it's not yet clear it ever will due to the slowness of RL.
But many directions need to be explored and maybe eventually they will reach a point where they become relevant.
It is interesting to learn the obviously inherent grammatical structure in language, though sadly, again, the trees here do not yet capture much of what our intuitions suggest.
Regardless, it's an interesting exploration, worthy of being discussed at the conference.
| 7: Good paper, accept | 7 | -1 |
|
B1OyMaWNg | Skvgqgqxe | Weak experimental results | In this paper, the authors propose a new method to learn hierarchical representations of sentences, based on reinforcement learning. They propose to learn a neural shift-reduce parser such that the induced tree structures lead to good performance on a downstream task. They use reinforcement learning (more specifically, the policy gradient method REINFORCE) to learn their model. The reward of the algorithm is the evaluation metric of the downstream task. The authors compare two settings: (1) no structure information is given (hence, the only supervision comes from the downstream task) and (2) actions from an external parser are used as supervision to train the policy network, in addition to the supervision from the downstream task. The proposed approach is evaluated on four tasks: sentiment analysis, semantic relatedness, textual entailment and sentence generation.
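For reference, the REINFORCE gradient estimator presumably being used has the standard textbook form below (generic notation, not copied from the paper; b denotes an optional baseline and R(τ) the downstream-task reward of a sampled trajectory of parsing actions):

```latex
\nabla_{\theta} J(\theta) \;=\; \mathbb{E}_{\tau \sim \pi_{\theta}}\Big[ \big(R(\tau) - b\big) \sum_{t} \nabla_{\theta} \log \pi_{\theta}(a_t \mid s_t) \Big]
```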
I like the idea of learning tree representations of text which are useful for a downstream task. The paper is clear and well written. However, I am not convinced by the experimental results presented in the paper. Indeed, on most tasks, the proposed model is far from state-of-the-art models:
- sentiment analysis, 86.5 v.s. 89.7 (accuracy);
- semantic relatedness, 0.32 v.s. 0.25 (MSE);
- textual entailment, 80.5 v.s. 84.6 (accuracy).
From the results presented in the paper, it is hard to know if these results are due to the model, or because of the reinforcement learning algorithm.
PROS:
- interesting idea: learning structures of sentences adapted for a downstream task.
- well written paper.
CONS:
- weak experimental results (do not really support the claim of the authors).
Minor comments:
In the second paragraph of the introduction, one might argue that bag-of-words is also a predominant approach to represent sentences.
Paragraph titles (e.g. in section 3.2) should have a period at the end.
----------------------------------------------------------------------------------------------------------------------
UPDATE
I am still not convinced by the results presented in the paper, and in particular by the fact that one must combine the words in a different way than left-to-right to obtain state of the art results.
However, I do agree that this is an interesting research direction, and that the results presented in the paper are promising. I am thus updating my score from 5 to 6. | 6: Marginally above acceptance threshold | 3: The reviewer is fairly confident that the evaluation is correct | 6 | 3 |
BJ_0DiWNx | BymIbLKgl | Limited theoretical novelty and evaluation | The authors show that a contrastive loss for a Siamese architecture can be used for learning representations of planar curves. With the proposed framework, the authors are able to learn a representation which is comparable to traditional differential or integral invariants, as evaluated on a few toy examples.
The paper is generally well written and shows an interesting application of the Siamese architecture. However, the experimental evaluation shows that these are rather preliminary results, as not many of the choices are validated. My biggest concern is the choice of the negative samples, as the network basically learns only to distinguish between shapes at different scales, instead of recognizing different shapes. It is a well-known fact that in order to achieve good performance with the contrastive loss, one has to be careful about hard negative sampling, as using overly easy negatives may lead to inferior results. Could this be the underlying reason for such a choice of negatives? Unfortunately, this is not discussed in the paper.
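For reference, the standard contrastive loss for a Siamese pair has the textbook form below (generic notation, not quoted from the paper; d is the distance between the two embeddings, y = 1 for a positive pair, and m is the margin):

```latex
\mathcal{L}(x_1, x_2, y) \;=\; y\, d(x_1, x_2)^2 \;+\; (1 - y)\, \max\big(0,\; m - d(x_1, x_2)\big)^2
```

Only negative pairs with d < m contribute any gradient through the hinge term, which is exactly why the choice of negatives matters so much here.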
Furthermore the paper misses a more thorough quantitative evaluation and concentrates more on showing particular examples, instead of measuring more robust statistics over multiple curves (invariance to noise and sampling artifacts).
In general, the paper shows interesting first steps in this direction; however, it is not clear whether the experimental section is strong and thorough enough for the ICLR conference. Also, the novelty of the proposed idea is limited, as Siamese networks have been used for many years and this work only shows that they can be applied to a different task.
|
HJehdh-4e | BymIbLKgl | filling a much needed gap? | I'm torn on this one. Seeing the MPEG-7 dataset and references to curvature scale space brought to mind the old saying that "if it's not worth doing, it's not worth doing well." There is no question that the MPEG-7 dataset/benchmark got saturated long ago, and it's quite surprising to see it in a submission to a modern ML conference. I brought up the question of "why use this representation" with the authors and they said their "main purpose was to connect the theory of differential geometry of curves with the computational engine of a convolutional neural network." Fair enough. I agree these are seemingly different fields, and the authors deserve some credit for connecting them. If we give them the benefit of the doubt that this was worth doing, then the approach they pursue using a Siamese configuration makes sense, and their adaptation of deep convnet frameworks to 1D signals is reasonable. To the extent that the old invariant based methods made use of smoothed/filtered representations coupled with nonlinearities, it's sensible to revisit this problem using convnets. I wouldn't mind seeing this paper accepted, since it's different from the mainstream, but I worry about there being too narrow an audience at ICLR that still cares about this type of shape representation. | 6: Marginally above acceptance threshold | 6 | -1 |
|
B10ljK-Nl | BymIbLKgl | An interesting representation | Pros :
- New representation with nice properties that are derived and compared with a mathematical baseline and background
- A simple algorithm to obtain the representation
Cons :
- The paper sounds like an applied maths paper, but further analysis on the nature of the representation could be done, for instance, by understanding the nature of each layer, or at least, the first.
| 8: Top 50% of accepted papers, clear accept | 3: The reviewer is fairly confident that the evaluation is correct | 8 | 3 |
Ske_zvGNl | rJ8Je4clg | Intriguing idea, but lacking theoretical and empirical validation | In this paper, a Q-Learning variant is proposed that aims at "propagating" rewards faster by adding extra costs corresponding to bounds on the Q function, that are based on both past and future rewards. This leads to faster convergence, as shown on the Atari Learning Environment benchmark.
The paper is well written and easy to follow. The core idea of using relaxed inequality bounds in the optimization problem is original to the best of my knowledge, and results seem promising.
This submission however has a number of important shortcomings that prevent me from recommending it for publication at ICLR:
1. The theoretical justification and analysis is very limited. As far as I can tell, the bounds as defined require a deterministic reward to hold, which is rarely the case in practice (a sketch of the k-step bound I have in mind is given after point 2 below). There is also the fact that the bounds are computed using the so-called "target network" with different parameters theta-, which is another source of discrepancy. And even before that, the bounds hold for Q* but are applied to Q, for which they may not be valid until Q gets close enough to Q*. It also looks odd to take the max over k in (1, ..., K) when the definition of L_j,k makes it look like the max has to be L_j,1 (or even L_j,0, but I am not sure why that one is not considered), since L*_j,0 >= L*_j,1 >= ... >= L*_j,K. Neither of these issues is discussed in the paper, and there is no theoretical analysis of the convergence properties of the proposed method.
[Update: some of these concerns were addressed in OpenReview comments]
2. The empirical evaluation does not compensate, in my opinion, for the lack of theory. First, since there are two bounds introduced, I would have expected "ablative" experiments showing the improvement brought by each one independently. It is also unfortunate that the authors did not have time to let their algorithm run longer, since, as shown in Fig. 1, there remains a significant number of games where it performs worse than DQN. In addition, comparisons are limited to vanilla DQN and DDQN: I believe it would have been important to compare to other ways of incorporating longer-term rewards, like n-step Q-Learning or actor-critic. Finally, there is no experiment demonstrating that the proposed algorithm can indeed improve other existing DQN variants: I agree with the authors when they say "We believe that our method can be readily combined with other techniques developed for DQN", however providing actual results showing this would have made the paper much stronger.
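To make the concern in point 1 concrete, here is the k-step lower bound as I read it (my notation, which may not match the paper's L_{j,k} exactly, and stated under the assumption of deterministic rewards and transitions along the sampled trajectory):

```latex
Q^{*}(s_j, a_j) \;\ge\; \sum_{i=0}^{k} \gamma^{i}\, r_{j+i} \;+\; \gamma^{k+1} \max_{a} Q^{*}\big(s_{j+k+1}, a\big)
```

With stochastic rewards or transitions this per-sample inequality need not hold, which is the source of my concern above.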
In conclusion, I do believe this line of research is worth pursuing, but also that additional work is required to really prove and understand its benefits.
Minor comments:
- Instead of citing the arxiv Wang et al (2015), it would be best to cite the 2016 ICML paper
- The description of Q-Learning in section 3 says "The estimated future reward is computed based on the current state s or a series of past states s_t if available." I am not sure what you mean by "a series of past states", since Q is defined as Q(s, a) and thus can only take the current state s as input, when defined this way.
- The introduction of R_j in Alg. 1 is confusing since its use is only explained later in the text (in section 5 "In addition, we also incorporate the discounted return R_j in the lower bound calculation to further stabilize the training")
- In Fig. S1 the legend should not say "10M" since the plot is from 1M to 10M | 4: Ok but not good enough - rejection | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4 | 4 |
BJhbTXKEx | rJ8Je4clg | Review | In this paper, the authors propose an extension to the DQN algorithm by introducing both an upper and a lower bound on the optimal Q function. The authors show experimentally that this approach improves data efficiency quite dramatically, such that they can achieve or even surpass the performance of a DQN that is trained for 8 days.
The idea is novel to the best of my knowledge and the improvement over DQN seems very significant.
Recently, Munos et al. introduced the Retrace algorithm, which can make use of multi-step returns to estimate Q values. I suspect that some of the improvement that comes from the bounds is due to the fact that multi-step returns are used effectively. Therefore, I was wondering whether the authors have tried any approach like Retrace or Tree-backup by Precup et al., and if so, how these methods stack up against the proposed method.
The authors have very impressive results and the paper proposes a very promising direction for future research; as a result, I would like to make a few suggestions:
First, it would be great if the authors could include a discussion about deterministic vs stochastic MDPs.
Second, it would be great if the authors could include some kind of theoretical analysis of the approach.
Finally, I would like to apologize for the late review. | 9: Top 15% of accepted papers, strong accept | 3: The reviewer is fairly confident that the evaluation is correct | 9 | 3 |
SJ8uwSGVx | rJ8Je4clg | review | This paper proposes an improvement to the q-learning/DQN algorithm using constraint bounds on the q-function, which are implemented using quadratic penalties in practice. The proposed change is simple to implement and remarkably effective, enabling both significantly faster learning and better performance on the suite of Atari games.
I have a few suggestions for improving the paper:
The paper could be improved by including qualitative observations of the learning process with and without the proposed penalties, to better understand the scenarios in which this method is most useful, and to develop a better understanding of its empirical performance.
It would also be nice to include zoomed-out versions of the learning curves in Figure 3, as the DQN has yet to converge. Error bars would also be helpful to judge stability over different random seeds.
As mentioned in the paper, this method could be combined with D-DQN. It would be interesting to see this combination, to see if the two are complementary. Do you plan to do this in the final version?
Also, a couple questions:
- Do you think the performance of this method would continue to improve after 10M frames?
- Could the ideas in this paper be extended to methods for continuous control like DDPG or NAF? | 9: Top 15% of accepted papers, strong accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 9 | 4 |
S1nGIQ-Vl | By1snw5gl | O(mn)? | L-SR1 seems to have O(mn) time complexity. I could not find this information in your paper.
Your experimental results suggest that L-SR1 does not outperform Adadelta (I suppose the same holds for Adam).
Given the time complexity of L-SR1, an x-axis showing wall-clock time would suggest that L-SR1 is much (say, m times) slower.
"The memory size of 2 had the lowest minimum test loss over 90" suggests that the main driving force of L-SR1
was its momentum, i.e., the second-order information was rather useless.
rk3f2SyVg | By1snw5gl | Address better optimization at saddle points with the symmetric rank-one method, which does not guarantee a pos. def. update matrix, vs. the BFGS approach. Investigating this optimization with a limited-memory version of SR1 | It is an interesting idea to go after saddle points in the optimization with an SR1 update, and the experiments are a good start, but the paper is missing important comparisons to recent 2nd-order optimizers such as Adam, other Hessian-free methods (Martens 2012), and Pearlmutter's fast exact multiplication by the Hessian. The MNIST/CIFAR curves do not really show an advantage over AdaDelta/NAG (although this is stated), and much more experimentation is needed to make a claim about insensitivity of performance to mini-batch size. Can you show error rates on a larger-scale task?
SyNjWlG4x | By1snw5gl | Interesting work, but not ready to be published | The paper proposes a new second-order method, L-SR1, to train deep neural networks. It is claimed that the method addresses two important optimization problems in this setting: poor conditioning of the Hessian and the proliferation of saddle points. The method can be viewed as a combination of the SR1 algorithm of Nocedal & Wright (2006) and the limited-memory representations of Byrd et al. (1994). First of all, I am missing a more formal, theoretical argument in this work (in general, providing more intuition would be helpful too), of the kind provided in the works of Dauphin (2014) or Martens. The experimental section is not very convincing, considering that performance in terms of wall-clock time is not reported and the advantage over some competitor methods is not very strong even in terms of epochs. I understand that the authors are still optimizing their implementation, but the question is: considering the experiments are not convincing, why would anybody bother to implement L-SR1 to train their deep models? The work is not ready to be published.
B17yL74He | S1Y0td9ee | Poor performance on bioinformatics dataset? | The paper proposes a method mainly for graph classification. The proposal is to decompose graph objects into hierarchies of small graphs, followed by generating vector embeddings and aggregating them using deep networks.
The approach is reasonable and intuitive; however, the experiments do not show the superiority of the approach.
The proposed method outperforms Yanardag et al. (2015) and Niepert et al. (2016) on social network graphs but is quite inferior to Niepert et al. (2016) on bioinformatics datasets. The authors did not report accuracy for Yanardag et al. (2015), which on similar bio-datasets, for example NCI1, is 80%, significantly better than that achieved by the proposed method. The authors' claim that their method is tailored more towards social network graphs is not supported by good arguments. For what models of graphs is this method more suitable?
r1xXahBNl | S1Y0td9ee | Interesting approach, confusing presentation. | The paper contributes to recent work investigating how neural networks can be used on graph-structured data. As far as I can tell, the proposed approach is the following:
1. Construct a hierarchical set of "objects" within the graph. Each object consists of multiple "parts" from the set of objects in the level below. There are potentially different ways a part can be part of an object (the different \pi labels), which I would maybe call "membership types". In the experiments, the objects at the bottom level are vertices. At the next level they are radius 0 (just a vertex?) and radius 1 neighborhoods around each vertex, and the membership types here are either "root", or "element" (depending on whether a vertex is the center of the neighborhood or a neighbor). At the top level there is one object consisting of all of these neighborhoods, with membership types of "radius 0 neighborhood" (isn't this still just a vertex?) or "radius 1 neighborhood".
2. Every object has a representation. Each vertex's representation is a one-hot encoding of its degree. To construct an object's representation at the next level, the following scheme is employed:
a. For each object, sum the representation of all of its parts having the same membership type.
b. Concatenate the sums obtained from different membership types.
c. Pass this vector through a multi-layer neural net.
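To check my understanding of steps a–c, here is a minimal runnable sketch of one level of the scheme; all names, shapes, and the toy ReLU network are my own illustration and are not taken from the paper:

```python
import numpy as np

def build_level(part_reprs, objects, weights):
    """One 'shift-aggregate-extract' level as I understand it.

    part_reprs: dict part_id -> 1-D numpy array (representations from the level below)
    objects:    dict obj_id  -> list of (part_id, membership_type) pairs
    weights:    list of (W, b) pairs defining a toy multi-layer net (illustrative only)
    """
    types = sorted({t for parts in objects.values() for _, t in parts})
    dim = len(next(iter(part_reprs.values())))
    out = {}
    for obj_id, parts in objects.items():
        # (a) sum the representations of parts sharing the same membership type
        sums = {t: np.zeros(dim) for t in types}
        for part_id, t in parts:
            sums[t] += part_reprs[part_id]
        # (b) concatenate the per-type sums
        h = np.concatenate([sums[t] for t in types])
        # (c) pass through a small multi-layer net (ReLU layers here, as an example)
        for W, b in weights:
            h = np.maximum(0.0, W @ h + b)
        out[obj_id] = h
    return out
```

A classifier on the single top-level object's representation would then presumably give the graph-level prediction, though how that final classification is performed is exactly one of the details I could not find.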
I've provided this summary mainly because the description in the paper itself is somewhat hard to follow, and relevant details are scattered throughout the text, so I'd like to verify that my understanding is correct.
Some additional questions I have that weren't clear from the text: how many layers and hidden units were used? What are the dimensionalities of the representations used at each layer? How is final classification performed? What is the motivation for the chosen "ego-graph" representation?
The proposed approach is interesting and novel, the compression technique appears effective, and the results seem compelling. However, the clarity and structure of the writing are quite poor. It took me a while to figure out what was going on---the initial description is provided without any illustrative examples, and it required jumping around the paper to figure out, for example, how the \pi labels are actually used. Important details around network architecture aren't provided, and very little in the way of motivation is given for many of the choices made. Were other choices of decomposition/object-part structures investigated, given the generality of the shift-aggregate-extract formulation? What motivated the choice of "ego-graphs"? Why one-hot degrees for the initial attributes?
Overall, I think the paper contains a useful contribution on a technical level, but the presentation needs to be significantly cleaned up before I can recommend acceptance. | 5: Marginally below acceptance threshold | 3: The reviewer is fairly confident that the evaluation is correct | 5 | 3 |
SJP14kfEx | S1Y0td9ee | Might be something good here, but key details are missing. | Some of the key details in this paper are very poorly explained or not explained at all. The model sounds interesting and there may be something good here, but it should not be published in its current form.
Specific comments:
The description of the R_l,pi convolutions in Section 2.1 was unclear. Specifically, I wasn't confident that I understood what the labels pi represented.
The description of the SAEN structure in section 2.2 was worded poorly. My understanding, based on Equation 1, is that the 'shift' operation is simply a summation of the representations of the member objects, and that the 'aggregate' operation simply concatenates the representations from multiple relations. In the 'shift' step, it seems more appropriate to average over the object's members' representations h_j, rather than sum over them.
The compression technique presented in Section 2.3 requires that multiple objects at a level have the same representation. Why would this ever occur, given that the representations are real valued and high-dimensional? The text is unintelligible: "two objects are equivalent if they are made by same sets of parts for all the pi-parameterizations of the R_l,pi decomposition relation."
The 'ego graph patterns' in Figure 1 and 'Ego Graph Neural Network' used in the experiments are never explained in the text, and no references are given. Because of this, I cannot comment on the quality of the experiments. | 3: Clear rejection | 3 | -1 |
|
B1-0khZEl | Sy2fzU9gl | Very interesting results, but more details and more quantitative results are needed |
This paper proposes the beta-VAE, which is a reasonable but also straightforward generalization of the standard VAE. In particular, a weighting factor beta is added to the KL-divergence term to balance the likelihood and the KL-divergence. Experimental results show that tuning this weighting factor is important for learning disentangled representations. A linear-classifier-based protocol is proposed for measuring the quality of disentanglement. Impressive illustrations of manipulating latent variables are shown in the paper.
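For readers unfamiliar with the formulation, the modified objective is simply (standard VAE notation; this is the textbook form rather than a verbatim copy from the paper):

```latex
\mathcal{L}(\theta, \phi; x) \;=\; \mathbb{E}_{q_{\phi}(z \mid x)}\big[\log p_{\theta}(x \mid z)\big] \;-\; \beta \, D_{\mathrm{KL}}\big(q_{\phi}(z \mid x) \,\|\, p(z)\big),
```

so that beta = 1 recovers the standard VAE and beta > 1 penalizes the KL term more heavily.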
Learning disentangled representations without supervision is an important topic. Showing the effectiveness of VAE for this task is interesting. Generalizing VAE with a weighting factor is straightforward (though reformulating VAE is also interesting), the main contribution of this paper is on the empirical side.
The proposed protocol for measuring disentangling quality is reasonable. Establishing a protocol is one important methodological contribution of this paper, but the presentation of Section 3 is still not good. Little motivation is provided at the beginning of Section 3. Figure 2 is a summary of the algorithm, which is helpful, but it is still necessary to intuitively explain the motivation in the first place (e.g., what you expect if a factor is disentangled, and why the performance of a classifier can reflect such an expectation). Moreover, 1) z_diff appears without any definition in the main text. 2) Using “decoding” for x~Sim(v,w) may make people confuse the ground-truth sampling procedure with the trained decoder.
The illustrative figures on traversing the disentangled factors are impressive, though image generation quality is not as good as InfoGAN (not the main point of this paper). However, 1) it would be helpful to discuss whether the good disentangling quality is only attributable to the beta factor and the VAE framework. For example, the training data in this paper seems to be densely sampled for the visualized factors. Does the sampling density play a critical role? 2) Not many qualitative results are provided for each experiment. Adding more figures (e.g., in an appendix) to cover more factors and seeding images would strengthen the conclusions drawn in this paper. 3) Another detailed question related to the generalizability of the model: are the seeding images for visualizing faces from unseen subjects or from subjects in the training set? (Maybe I missed something here.)
Quantitative results are only presented for the synthesized 2D shapes. What hinders this paper from reporting quantitative numbers on real data (e.g., the 2D and 3D face data)? One possible reason is that not all factors can be disentangled for real data, but it is still feasible to pick some well-defined factor to measure quantitative performance.
Quantitative performance is only measured by the proposed protocol. Since the effectiveness of the protocol is something the paper needs to justify, reporting quantitative results using a simpler protocol would be helpful both for demonstrating the disentangling quality and for justifying the proposed protocol (consistency with other measurements). A simple experiment is facial identity recognition and pose estimation using the disentangled features on a standard test set (as in Reed et al., ICML 2014).
In Figure 6 (left), why is ICA worse than PCA for disentanglement? Is it due to a limitation of the ICA algorithm or to some other reason?
In Figure 6 (right), what is “factor change accuracy”? According to Appendix A.4 (which is not referred to in the main text), it is the “Disentanglement metric score”. Is that right?
If so, Figure 6 (right) shows the reconstruction results for the best disentanglement metric score. Then, 1) how about random generation or traversing along a disentangled factor? 2) More importantly, how are the reconstruction/generation results when the disentanglement metric score is suboptimal?
Overall, the results presented in this paper are very interesting, but there are many details to be clarified. Moreover, more quantitative results are also needed. I hope at least some of the above concerns can be addressed.
| 6: Marginally above acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6 | 4 |
H16z7IT4l | Sy2fzU9gl | The paper proposes beta-VAE, which strengthens the KL divergence between the recognition model and the prior to limit the capacity of the latent variables while sacrificing reconstruction error. This allows the VAE model to learn more disentangled representations.
The main concern is that the paper doesn't present any quantitative results on log-likelihood estimation. Regarding the quality of generated samples, although the beta-VAE learns disentangled representations, the generated samples are not as realistic as those based on generative adversarial networks, e.g., InfoGAN. Beta-VAE learns some interpretable factors of variation, but it still remains unclear why its representation is better (or more efficient) than that of the standard VAE.
In the experiments, what is the criterion for cross-validating the hyperparameter \beta?
There also exist other ways to limit the capacity of the model. The simplest way is to reduce the latent-variable dimension. I am wondering how the proposed beta-VAE is a better model than a VAE with a reduced, or optimal, latent-variable dimension.
| 5: Marginally below acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 5 | 4 |
|
HyRZoSLVe | Sy2fzU9gl | Simple and effective | Summary
===
This paper presents Beta-VAE, an augmented Variational Auto-Encoder which
learns disentangled representations. The VAE objective is derived
as an approximate relaxation of a constrained optimization problem where
the constraint matches the latent code of the encoder to a prior.
When KKT multiplier beta on this constraint is set to 1 the result is the
original VAE objective, but when beta > 1 we obtain Beta-VAE, which simply
increases the penalty on the KL divergence term. This encourages the model to
learn a more efficient representation because the capacity of the latent
representation is more limited by beta. The distribution of the latent
representation is rewarded more when factors are independent because
the prior (an isotropic Gaussian) encourages independent factors, so the
representation should also be disentangled.
A new metric is proposed to evaluate the degree of disentanglement. Given
a setting in which some disentangled latent factors are known, many examples
are generated which differ in all of these factors except one. These examples
are encoded into the learned latent representation and a simple classifier
is used to predict which latent factor was kept constant. If the learned
representation does not disentangle the constant factor then the classifier
will more easily confuse factors and its accuracy will be lower. This
accuracy is the final number reported.
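To make sure I have understood the procedure, here is a small runnable sketch of the metric; the linear "simulator" and identity "encoder" are stand-ins of my own (purely illustrative), not the authors' code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_factors, obs_dim, n_votes, pairs_per_vote = 5, 10, 600, 64

# Stand-in generative process and encoder (illustrative assumptions only).
W = rng.normal(size=(n_factors, obs_dim))
def simulate(v):                     # ground-truth factors -> observations
    return v @ W + 0.05 * rng.normal(size=(v.shape[0], obs_dim))
def encode(x):                       # observations -> inferred latent code
    return x                         # a real evaluation would use the trained encoder here

X, y = [], []
for _ in range(n_votes):
    k = rng.integers(n_factors)      # the generative factor held fixed for this "vote"
    v1 = rng.uniform(size=(pairs_per_vote, n_factors))
    v2 = rng.uniform(size=(pairs_per_vote, n_factors))
    v2[:, k] = v1[:, k]              # keep factor k constant, let the others vary
    z_diff = np.abs(encode(simulate(v1)) - encode(simulate(v2))).mean(axis=0)
    X.append(z_diff)                 # averaged |z1 - z2| is the classifier input
    y.append(k)

X, y = np.array(X), np.array(y)
split = n_votes // 2                 # simple train/test split of the votes
clf = LogisticRegression(max_iter=2000).fit(X[:split], y[:split])
print("disentanglement-style accuracy:", clf.score(X[split:], y[split:]))
```

If the encoder mixed the factors, the averaged |z1 - z2| would carry little information about which factor was fixed, and the accuracy would drop towards chance.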
A synthetic dataset of 2D shapes with known latent factors is created to
test the proposed metric, and Beta-VAE outperforms a number of baselines
(notably InfoGAN and the semi-supervised DC-IGN).
Qualitative results show that Beta-VAE learns disentangled factors
on the 3D chairs dataset, a dataset of 3D faces, and the celebA dataset
of face images. The effect of varying Beta is also evaluated using the proposed
metric and the latent factors learned on the 2D shapes dataset are explored
in detail.
Strengths
===
* Beta-VAE is simple and effective.
* The proposed metric is a novel way of testing whether ground truth factors
of variation have been identified.
* There is extensive comparison to relevant baselines.
Weaknesses
===
* Section 3 describes the proposed disentanglement metric, however I feel
I need to read the caption of the associated figure (I thank the authors for adding
that) and Appendix 4 to understand the metric intuitively or in detail.
It would be easier to read this section if a clear intuition preceded
a detailed description and I think more space should be devoted to this
in the paper.
* Appendix 4: Why was the bottom 50% of the resulting scores discarded?
* As indicated in pre-review comments, the disentanglement metric is similar
to a measure of correlation between latent features. Could the proposed metric
be compared to a direct measure of cross-correlation between latent factors
estimated over the 2D shapes dataset?
* The end of section 4.2 observes that high beta values result in low
disentanglement, which suggests the most efficient representation is not
disentangled. This seems to disagree with the intuition from the approach
section that more efficient representations should be disentangled. It would
be nice to see discussion of potential reasons for this disagreement.
* The writing is somewhat dense.
Overall Evaluation
===
The core idea is novel and simple, and extensive tests show that it is effective.
The proposed evaluation metric is novel and might come into broader use.
The main downside to the current version of this paper is the presentation,
which provides sufficient detail but could be more clear. | 7: Good paper, accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7 | 4 |
Hyq3zhbVg | SJg498clg | Review | The paper proposes a model that aims at learning to label the nodes of a graph in a semi-supervised setting. The idea of the model is to use the graph structure to regularize the representations learned at the node level. Experimental results are provided on different tasks.
The underlying idea of this paper (graph regularization) has already been explored in different papers – e.g. 'Learning latent representations of nodes for classifying in heterogeneous social networks' [Jacob et al. 2014] and [Weston et al. 2012], where a real graph structure is used instead of a built one. The experiments lack strong comparisons with other graph models (e.g. Iterative Classification, 'Learning from labeled and unlabeled data on a directed graph', ...). So the novelty of the paper and the experimental protocol are not strong enough to accept the paper.
Pros:
* Learning over graph is an important topic
Cons:
* Many existing approaches have already exploited the same types of ideas, resulting in very close models
* Lack of comparison w.r.t existing models
| 3: Clear rejection | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 3 | 4 |
SkitQvmNl | SJg498clg | Very similar to previous work, rebranded. | The authors introduce a semi-supervised method for neural networks, inspired by label propagation.
The method appears to be exactly the same as the one proposed in (Weston et al., 2008) (the authors cite the 2012 paper). The optimized objective function in eq (4) is exactly the same as eq (9) in (Weston et al., 2008).
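For context, my reading is that both objectives are instances of the generic graph-regularized form below (my notation, not a verbatim copy of eq (4) or eq (9); h denotes a hidden or output representation, the first sum runs over labelled examples and the second over graph edges):

```latex
\sum_{i \in \mathcal{L}} \ell\big(f_{\theta}(x_i), y_i\big) \;+\; \lambda \sum_{(i,j) \in \mathcal{E}} w_{ij}\, \big\| h_{\theta}(x_i) - h_{\theta}(x_j) \big\|^2
```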
As possible novelty, the authors propose to use the adjacency matrix as input to the neural network, when there are no other features, and show success on the BlogCatalog dataset.
Experiments on text classification use neighbors according to an average word2vec embedding to build the adjacency matrix. The top reported accuracies are not convincing compared to the performance reported in (Zhang et al., 2015). The last experiment is on semantic intent classification, which is a custom dataset; neighbors are also found according to a word2vec metric.
In summary, the paper proposes a few applications of the original (Weston et al., 2008) algorithm. It rebrands the algorithm under a new name, does not bring any scientific novelty, and the experimental section lacks existing baselines to be convincing.
BJofT1mNg | SJg498clg | Very similar to previous work. | This paper proposes the Neural Graph Machine that adds in graph regularization on neural network hidden representations to improve network learning and take the graph structure into account. The proposed model, however, is almost identical to that of Weston et al. 2012.
As the authors have clarified in the answers to the questions, there are a few new things that previous work did not do:
1. they showed that graph-augmented training works for a range of different types of networks, including FF, CNN, RNNs, etc., and on a range of problems.
2. graphs help to train better networks, e.g. a 3-layer CNN with graphs does as well as 9-layer CNNs
3. graph augmented training works on a variety of different kinds of graphs.
However, all the points mentioned above seem to simply be different applications of the graph-augmented training idea, and observations made during those applications. I therefore think it is not proper to call the proposed model a novel model with a new name, Neural Graph Machine; rather, making it clear in the paper that this is an empirical study applying the model proposed by Weston et al. (2012) to different problems would be more acceptable.
HJsxV1GVx | B16dGcqlx | Interesting idea for imitation learning. Paper could have been more general. | The paper presents an interesting new problem setup for imitation learning: an agent tries to imitate a trajectory demonstrated by an expert but said trajectory is demonstrated in a different state or observation space than the one accessible by the agent (although the dynamics of the underlying MDP are shared). The paper proposes a solution strategy that combines recent work on domain confusion losses with a recent IRL method based on generative adversarial networks.
I believe the general problem to be relevant and agree with the authors that it results in a more natural formulation for imitation learning that might be more widely applicable.
There are however a few issues with the paper in its current state that make the paper fall short of being a great exploration of a novel idea. I will list these concerns in the following (in arbitrary order)
- The paper feels at times to be a bit hurriedly written (this also mainly manifests itself in the experiments, see comment below) and makes a few fairly strong claims in the introduction that in my opinion are not backed up by their approach. For example: "Advancements in this class of algorithms would significantly improve the state of robotics, because it will enable anyone to easily teach robots new skills"; given that the current method to my understanding has the same issues that come with standard GAN training (e.g. instability etc.) and requires a very accurate simulator to work well (since TRPO will require a large number of simulated trajectories in each step) this seems like an overstatement.
There are some sentences that are ungrammatical or switch tense in the middle of the sentence making the paper harder to read than necessary, e.g. Page 2: "we find that this simple approach has been able to solve the problems"
- The general idea of third person imitation learning is nice, clear and (at least to my understanding) also novel. However, instead of exploring how to generally adapt current IRL algorithms to this setting the authors pick a specific approach that they find promising (using GANs for IRL) and extend it. A significant amount of time is then spent on explaining why current IRL algorithms will fail in the third-person setting. I fail to see why the situation for the GAN based approach is any different than that of any other existing IRL algorithm. To be more clear: I see no reason why e.g. behavioral cloning could not be extended with a domain confusion loss in exactly the same way as the approach presented. To this end it would have been nice to rather discuss which algorithms can be adapted in the same way (and also test them) and which ones cannot. One straightforward approach to apply any IRL algorithm would for example be to train two autoencoders for both domains that share higher layers + a domain confusion loss on the highest layer, should that not result in features that are directly usable? If not, why?
- While the general argument that existing IRL algorithms will fail in the proposed setting seems reasonable, it is still unfortunate that no attempts have been made to validate this empirically. No comparison is made regarding what happens when one e.g. performs supervised learning (behavioral cloning) using the expert observations and then transfers to the changed domain. How well would this work in practice? Also, how fast can different IRL algorithms solve the target task in general (assuming a first-person perspective)?
- Although I like the idea of presenting the experiments as being directed towards answering a specific set of questions, I feel that the posed questions somewhat distract from the main theme of the paper. Question 2 suddenly makes the use of additional velocity information a main point of importance, and the experiments regarding Question 3 in the end only contain evaluations of two hyperparameters (ignoring all other parameters such as the parameters for TRPO, the number of rollouts per iteration, the number of presented expert episodes and the design choices for the GAN). I understand that not all of these can be evaluated thoroughly in a conference paper, but I feel that some more experiments or at least some discussion would have helped here.
- The presented experimental evaluation somewhat hides the cost of TRPO training with the obtained reward function. How many roll-outs are necessary in each step?
- The experiments lack some details: How are the expert trajectories obtained? The domains for the pendulum experiment seem identical except for coloring of the pole, is that correct (I am surprised this small change seems to have such a detrimental effect)? Figure 3 shows average performance over 5 trials, what about Figure 5 (if this is also average performance, what is the variance here)? Given that GANs are not easy to train, how often does the training fail/were you able to re-use the hyperparameters across all experiments?
UPDATE:
I updated the score. Please see my response to the rebuttal below.
| 6: Marginally above acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6 | 4 |
SJezwxzEg | B16dGcqlx | Interesting idea but needs more experiments | This paper proposes a novel adversarial framework to train a model from demonstrations given in a third-person perspective to perform the task in the first-person view. Here adversarial training is used to extract a novice/expert- (i.e., third-person/first-person-) independent feature that the agent can then use to perform the same policy from a different viewpoint.
While the idea is quite elegant and novel (I enjoyed reading it), more experiments are needed to justify the approach. Probably the most important issue is that there is no baseline, e.g., what if we train the model with images from the same viewpoint? This should be better than the proposed approach, but how close are they? How does the performance change when we gradually change the viewpoint from third-person to first-person? Another important question is whether the network just blindly memorizes the policy; in this case, the extracted features could be artifacts of the input image that implicitly count the time tick in some way (and are thus domain-agnostic) but can still yield a reasonable policy. Since the experiments are conducted in a synthetic environment, this might happen. An easy check is to run the algorithm on multiple viewpoints and/or with blurred/differently rendered images, and/or with random initial conditions.
Other ablation analyses are also needed. For example, I am not fully convinced by the gradient-flipping trick used in Eqn. 5, and in the experiments there is no ablation analysis for it (GAN/EM-style training versus the gradient-flipping trick). For the experiments, Figs. 4, 5 and 6 do not have error bars and are not very convincing.
B1uj8o-Ee | B16dGcqlx | Review | The paper extends the imitation learning paradigm to the case where the demonstrator and learner have different points of view. This is an important contribution, with several good applications. The main insight is to use adversarial training to learn a policy that is robust to this difference in perspective. This problem formulation is quite novel compared to the standard imitation learning literature (usually first-person perspective), though it has close links to the literature on transfer learning (as explained in Sec. 2).
The basic approach is clearly explained, and follows quite readily from recent literature on imitation learning and adversarial training.
I would have expected to see comparison to the following methods added to Figure 3:
1) Standard 1st person imitation learning using agent A data, and apply the policy on agent A. This is an upper-bound on how well you can expect to do, since you have the correct perspective.
2) Standard 1st person imitation learning using agent A data, then apply the policy on agent B. Here, I expect it might do less well than 3rd person learning, but worth checking to be sure, and showing what is the gap in performance.
3) Reinforcement learning using agent A data, and apply the policy on agent A. I expect this might do better than 3rd person imitation learning but it might depend on the scenario (e.g. difficulty of imitation vs exploration; how different are the points of view between the agents). I understand this is how the expert data is collected for the demonstrator, but I don’t see the performance results from just using this procedure on the learner (to compare to Fig.3 results).
Including these results would in my view significantly enhance the impact of the paper. | 6: Marginally above acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6 | 4 |
S1Jpha-Vl | HysBZSqlx | This paper presents a valuable new collection of video game benchmarks, in an extendable framework, and establishes initial baselines on a few of them.
Reward structures: for how many of the possible games have you implemented the means to extract scores and incremental reward structures? From the github repo it looks like about 10 -- do you plan to add more, and when?
“rivalry” training: this is one of the weaker components of the paper, and it should probably be emphasised less. On this topic, there is a vast body of (uncited) multi-agent literature, it is a well-studied problem setup (more so than RL itself). To avoid controversy, I would recommend not claiming any novel contribution on the topic (I don’t think that you really invented “a new method to train an agent by enabling it to train against several opponents” nor “a new benchmarking technique for agents evaluation, by enabling them to compete against each other, rather than playing against the in-game AI”). Instead, just explain that you have established single-agent and multi-agent baselines for your new benchmark suite.
Your definition of Q-function (“predicts the score at the end of the game given the current state and selected action”) is incorrect. It should read something like: it estimates the cumulative discounted reward that can be obtained from state s, starting with action a (and then following a certain policy).
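For concreteness, the standard definition I would expect is (textbook notation, not quoted from the paper):

```latex
Q^{\pi}(s, a) \;=\; \mathbb{E}\Big[\, \textstyle\sum_{t=0}^{\infty} \gamma^{t} r_{t+1} \;\Big|\; s_0 = s,\; a_0 = a,\; \pi \Big], \qquad Q^{*}(s, a) \;=\; \max_{\pi} Q^{\pi}(s, a).
```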
Minor:
* Eq (1): the Q-net inside the max() is the target network, with different parameters theta’
* the Du et al. reference is missing the year
* some of the other references should point at the corresponding published papers instead of the arxiv versions | 7: Good paper, accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7 | 4 |
|
H1f6QHHVl | HysBZSqlx | Final review: Nice software contribution, expected more significant scientific contributions | The paper presents a new environment, called Retro Learning Environment (RLE), for reinforcement learning. The authors focus on Super Nintendo but claim that the interface supports many others (including ALE). Benchmark results are given for standard algorithms in 5 new Super Nintendo games, and some results using a new "rivalry metric".
These environments (or, more generally, standardized evaluation methods like public data sets, competitions, etc.) have a long history of improving the quality of AI and machine learning research. One example in the past few years was the Atari Learning Environment (ALE) which has now turned into a standard benchmark for comparison of algorithms and results. In this sense, the RLE could be a worthy contribution to the field by encouraging new challenging domains for research.
That said, the main focus of this paper is presenting this new framework and showcasing the importance of new challenging domains. The results of the experiments themselves are for existing algorithms. There are some new results that show reward shaping and policy shaping (having a bias toward going right in Super Mario) help during learning. And, yes, domain knowledge helps, but this is obvious. The rivalry training is an interesting idea: when training against a different opponent, the learner overfits to that opponent and forgets how to play against the in-game AI; but then, oddly, it gets evaluated on how well it does against the in-game AI!
Also the part of the paper that describes the scientific results (especially the rivalry training) is less polished, so this is disappointing. In the end, I'm not very excited about this paper.
I was hoping for a more significant scientific contribution to accompany this new environment. It's not clear if this is necessary for publication, but it's also not clear that ICLR is the right venue for this work, since the contribution is mainly about the new code (for example, mloss.org could be a better 'venue', and JMLR has an associated journal track for accompanying papers: http://www.jmlr.org/mloss/)
--- Post response:
Thank you for the clarifications. Ultimately I have not changed my opinion on the paper. Though I do think RLE could have a nice impact long-term, there is little new science in this paper, and it's either too straightforward (reward shaping, policy shaping) or not quite developed enough (rivalry training). | 5: Marginally below acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 5 | 4
Sy3UiUz4l | HysBZSqlx | Ok but limited contributions | This paper introduces a new reinforcement learning environment called "The Retro Learning Environment", which interfaces with the open-source LibRetro API to offer access to various emulators and associated games (i.e. similar to the Atari 2600 Arcade Learning Environment, but more generic). The first supported platform is the SNES, with 5 games (more consoles and games may be added later). The authors argue that SNES games pose more challenges than Atari's (due to more complex graphics, AI and game mechanics). Several DQN variants are evaluated in experiments, and it is also proposed to compare learning algorithms by letting them compete against each other in multiplayer games.
I like the idea of going toward more complex games than those found on the Atari 2600, and having an environment where new consoles and games can easily be added sounds promising. With OpenAI Universe and DeepMind Lab having just come out, though, I am not sure we really need another one right now. Especially since using ROMs of emulated games we do not own is technically illegal: it looks like this did not cause too much trouble for Atari, but it might start raising eyebrows if the community moves to more advanced and recent games, especially ones Nintendo still makes money from.
Besides the introduction of the environment, it is good to have DQN benchmarks on five games, but this does not add a lot of value. The authors also mention as a contribution "A new benchmarking technique, allowing algorithms to compete against each other, rather than playing against the in-game AI", but this seems a bit exaggerated to me: the idea of pitting AIs against each other has been at the core of many AI competitions for decades, so it is hardly something new. The finding that reinforcement learning algorithms tend to specialize to their opponent is also not particularly surprising.
Overall I believe this is an ok paper but I do not feel it brings enough to the table for a major conference. This does not mean, however, that this new environment won't find a spot in the (now somewhat crowded) space of game-playing frameworks.
Other small comments:
- There are lots of typos (way too many to mention them all)
- It is said that Infinite Mario "still serves as a benchmark platform"; however, as far as I know it had to be shut down due to Nintendo not being too happy about it
- "RLE requires an emulator and a computer version of the console game (ROM file) upon initialization rather than a ROM file only. The emulators are provided with RLE" => how is that different from ALE that requires the emulator Stella which is also provided with ALE?
- Why is there no DQN / DDDQN result on Super Mario?
- It is not clear if Figure 2 displays the F-Zero results using reward shaping or not
- The Du et al reference seems incomplete | 4: Ok but not good enough - rejection | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4 | 4 |
HJf3GfM4e | rkE3y85ee | Review: Categorical Reparameterization with Gumbel-Softmax | The authors propose a method for reparameterization gradients with categorical distributions. This is done by using the Gumbel-Softmax distribution, a smoothed version of the Gumbel-Max trick for sampling from a multinomial.
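As an illustration of the relaxation under review, here is a minimal numpy sketch of drawing one Gumbel-Softmax sample; the logits, the temperature value, and the function name are illustrative choices of mine, not taken from the paper:

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature, rng=np.random):
    # Standard Gumbel noise via -log(-log(U)), with U ~ Uniform(0, 1).
    u = rng.uniform(low=1e-20, high=1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))
    # Softmax of (logits + Gumbel noise) / temperature yields a point on the simplex;
    # as the temperature goes to 0 the sample approaches a one-hot categorical draw.
    y = (logits + gumbel) / temperature
    y = y - y.max()                      # for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()

sample = gumbel_softmax_sample(np.log(np.array([0.1, 0.6, 0.3])), temperature=0.5)
```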
The paper is well-written and clear. The application to the semi-supervised model in Kingma et al. (2014) makes sense for large classes, as well as its application to general stochastic computation graphs (Schulman et al., 2015).
One disconcerting point is that (from my understanding at least), this does not actually perform variational inference for discrete latent variable models. Rather, it changes the probability model itself and performs approximate inference on the modified (continuous relaxed) version of the model. This is fine in practice given that it's all approximate inference, but unlike previous variational inference advances either in more expressive approximations or faster computation (as noted by the different gradient estimators they compare to), the probability model is fundamentally changed.
Two critical points seem key: the sensitivity of the temperature, and whether this applies for non-one hot encodings of the categorical distribution (and thus sufficiently scale to high dimensions). Comments by the authors on this are welcome.
There is related work by Rolfe (2016) on discrete VAEs, which also considers a continuous relaxation approach. This is worth citing and comparing to (or at least mentioning) in the paper.
References
Rolfe, J. T. (2016). Discrete Variational Autoencoders. arXiv.org. | 6: Marginally above acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6 | 4 |
SJ1R_ieEg | rkE3y85ee | The paper is well written but the novelty of the paper is less clear | The paper combines the Gumbel distribution with the popular softmax function to obtain a continuous distribution on the simplex that can approximate categorical samples. It is not surprising that Gumbel-Softmax outperforms other single-sample gradient estimators. However, I am curious about how Gumbel compares with Dirichlet experimentally.
The computational efficiency of the estimator when training semi-supervised models is nice. However, the advantage will be greater when the number of classes is huge, which doesn't seem to be the case in a simple dataset like MNIST. I am wondering why the experiments are not done on a richer dataset.
The presentation of the paper is neat and clean. The experiments settings are clearly explained and the analysis appears to be complete.
The only concern I have is the novelty of this work. I consider this work a nice but perhaps incremental (relatively small) contribution to our community. | 6: Marginally above acceptance threshold | 3: The reviewer is fairly confident that the evaluation is correct | 6 | 3
Sk0G5NVEg | rkE3y85ee | Interesting idea, encouraging results | This paper introduces a continuous relaxation of the categorical distribution, namely the Gumbel-Softmax distribution, such that generative models with categorical random variables can be trained using reparameterization (path-derivative) gradients. The method is shown to improve upon other methods in terms of the achieved log-likelihoods of the resulting models. The main contribution, namely the method itself, is simple yet nontrivial and worth publishing, and seems effective in experiments. The paper is well-written, and I applaud the details provided in the appendix. The main application seems to be semi-supervised situations where you really want categorical variables.
- P1: "differentiable sampling mechanism for softmax". "sampling" => "approximate sampling", since it's technically sampling from the Gumbal-softmax.
- P3: "backpropagtion"
- Section 4.1: Interesting experiments.
- It would be interesting to report whether there is any discrepancy between the relaxed and non-relaxed models in terms of log-likelihood. Currently, only the likelihoods under the non-relaxed models are reported.
- It is slightly discouraging that the temperature (a nuisance parameter) is used differently across experiments. It would be nice to give more details on whether you were successful in learning the temperature, instead of annealing it; it would be interesting if that hyper-parameter could be eliminated. | 7: Good paper, accept | 7 | -1
|
HJKt06-Ng | HyEeMu_xx | Review | The paper presents an architecture to incrementally attend to image regions - at multiple layers of a deep CNN. In contrast to most other models, the model does not apply a weighted average pooling in the earlier layers of the network but only in the last layer. Instead, the features are reweighted in each layer with the predicted attention.
1. Contribution of approach: The approach to use attention in this way is to my knowledge novel and interesting.
2. Qualitative results:
2.1. I like the large number of qualitative results; however, I would have preferred the focus to be less on the “number” dataset and more on the Visual Genome dataset.
2.2. The qualitative results for the Genome dataset unfortunately do not provide the predicted attributes. It would be interesting to see, e.g., the highest predicted attributes for a given query. So far the results only show the intermediate results.
3. Quantitative results:
3.1. The paper presents results on two datasets, one simulated dataset as well as Visual Genome. On both it shows moderate but significant improvements over related approaches.
3.2. For the Visual Genome dataset, it would be interesting to include a quantitative evaluation of how good the localization performance of the attention approach is.
3.3. It would be interesting to get a more detailed understanding of the model by providing results for different CNN layers where the attention is applied.
4. It would be interesting to see results on more established tasks, e.g. VQA, where the model should similarly apply. In fact, the task on the numbers seems to be identical to the VQA task (input/output), so most/all state-of-the-art VQA approaches should be applicable.
Other (minor/discussion points)
- Something seems wrong in the last two columns in Figure 11: the query “7” is blue not green. Either the query or the answer seem wrong.
- Section 3: “In each layer, the each attended feature map” -> “In each layer, each attended feature map”
- I think Appendix A would be clearer if it stated that this is the attention mechanism used in SAN and which work it is based on.
Summary:
While the experimental evaluation could be improved with more detailed evaluation, comparisons, and qualitative results, the presented evaluation is sufficient to validate the approach. The approach itself is novel and interesting to my knowledge and speaks for acceptance.
| 7: Good paper, accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7 | 4 |
SynYYsrNe | HyEeMu_xx | This paper proposes an attention mechanism which is essentially a gating on every spatial feature. Though they claim novelty through the attention being progressive, progressive attention has been done before [Spatial Transformer Networks, Deep Networks with Internal Selective Attention through Feedback Connections], and the element-wise multiplicative gates are very similar to convolutional LSTMs and Highway Nets. There is a lack of novelty and no significant results.
Pros:
- The idea of progressive attention on features is good, but has been done in [Spatial Transformer Networks, Deep Networks with Internal Selective Attention through Feedback Connections]
- Good visualisations.
Cons:
- No progressive baselines were evaluated, e.g. STN and HAN at every layer acting on featuremaps.
- Not clear how the query is fed into the localisation networks of baselines.
- The gap in performance between the baselines and PAN is very different on the author-made synthetic data versus the Visual Genome dataset. Why is this? There is no significant performance gain on any standard dataset.
- No real novelty. | 4: Ok but not good enough - rejection | 4 | -1 |
||
SyYWBfzNl | HyEeMu_xx | Good paper, but would help to have experiments on a more benchmarked dataset | This paper presents a hierarchical attention model that uses multiple stacked layers of soft attention in a convnet. The authors provide results on a synthetic dataset in addition to doing attribute prediction on the Visual Genome dataset.
Overall I think this is a well executed paper, with good experimental results and nice qualitative visualizations. The main thing I believe it is missing would be experiments on a dataset like VQA which would help better place the significance of this work in context of other approaches.
An important missing citation is Graves (2013), which had an early version of the attention model.
Minor typo:
"It confins possible attributes.." -> It confines..
"ImageNet (Deng et al., 2009), is used, and three additional" -> ".., are used," | 6: Marginally above acceptance threshold | 3: The reviewer is fairly confident that the evaluation is correct | 6 | 3 |
S1Y403RQe | SyVVJ85lg | Final review: Sound paper but a very simple model, few experiments at start but more added. | In PALEO the authors propose a simple model of the execution of deep neural networks. It turns out that even this simple model makes it possible to predict the computation time quite accurately for image recognition networks, both in single-machine and distributed settings.
The ability to predict network running time is very useful, and the paper shows that even a simple model does it reasonably, which is a strength. But the tests are only performed on a few networks of very similar type (AlexNet, Inception, NiN) and only in a few settings. Much broader experiments, including a variety of models (RNNs, fully connected, adversarial, etc.) in a variety of settings (different batch sizes, layer sizes, node placement on devices, etc.) would probably reveal weaknesses of the proposed very simplified model. This is why this reviewer considers this paper borderline -- it's a first step, but a very basic one and without sufficiently large experimental underpinning.
More experiments were added, so I'm updating my score. | 6: Marginally above acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6 | 4 |
H1GUJz-Ne | SyVVJ85lg | This paper introduces an analytical performance model to estimate the training and evaluation time of a given network for different software, hardware and communication strategies.
The paper is very clear. The authors include many degrees of freedom in the variables used to calculate the run-time of a network, such as the number of workers, bandwidth, platform, and parallelization strategy. Their results are consistent with the reported results from the literature.
Furthermore, their code is open-source and the live demo is looking good.
The authors mentioned in their comment that they will allow users to upload customized networks and model splits in the coming releases of the interface, then the tool can become very useful.
It would be interesting to see some newer network architectures with skip connections such as ResNet, and DenseNet.
| 7: Good paper, accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7 | 4 |
|
SyzvzN7Qx | SyVVJ85lg | Technically sound. Only useful under the assumption that the code is released. | This paper is technically sound. It highlights well the strengths and weaknesses of the proposed simplified model.
In terms of impact, its novelty is limited, in the sense that the authors did seemingly the right thing and obtained the expected outcomes. The idea of modeling deep learning computation is not in itself particularly novel. As a companion paper to an open source release of the model, it would meet my bar of acceptance in the same vein as a paper describing a novel dataset, which might not provide groundbreaking insights, yet be generally useful to the community.
In the absence of released code, even if the authors promise to release it soon, I am more ambivalent, since that's where all the value lies. It would also be a different story if the authors had been able to use this framework to make novel architectural decisions that improved training scalability in some way, and incorporated such new insights in the paper.
UPDATED: code is now available. Revised review accordingly. | 6: Marginally above acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6 | 4 |
r1bVaaUNx | rJY0-Kcll | An interesting work to understand gradient descent as a recurrent process | This paper describes a new approach to meta-learning by interpreting the SGD update rule as a gated recurrent model with trainable parameters. The idea is original and important for research related to transfer learning. The paper has a clear structure, but clarity could be improved at some points.
Pros:
- An interesting and feasible approach to meta-learning
- Competitive results and proper comparison to state-of-the-art
- Good recommendations for practical systems
Cons:
- The analogy would be closer to GRUs than LSTMs
- The description of the data separation in meta sets is hard to follow and could be visualized
- The experimental evaluation is only partly satisfying, especially the effect of the parameters of i_t and f_t would be of interest
- Fig 2 doesn't have much value
Remarks:
- Small typo in 3.2: "This means each coordinate has it" -> its
> We plan on releasing the code used in our evaluation experiments.
This would certainly be a major plus. | 6: Marginally above acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6 | 4 |
SyiRxi7El | rJY0-Kcll | Strong paper but presentation unclear at times | In light of the authors' responsiveness and the updates to the manuscript -- in particular to clarify the meta-learning task -- I am updating my score to an 8.
-----
This manuscript proposes to tackle few-shot learning with neural networks by leveraging meta-learning, a classic idea that has seen a renaissance in the last 12 months. The authors formulate few-shot learning as a sequential meta-learning problem: each "example" includes a sequence of batches of "training" pairs, followed by a final "test" batch. The inputs at each "step" include the outputs of a "base learner" (e.g., training loss and gradients), as well as the base learner's current state (parameters). The paper applies an LSTM to this meta-learning problem, using the inner memory cells in the *second* layer to directly model the updated parameters of the base learner. In doing this, they note similarities between the respective update rules of LSTM memory cells and gradient descent. Updates to the LSTM meta-learner are computed based on the base learner's prediction loss for the final "test" batch. The authors make several simplifying assumptions, such as sharing weights across all second layer cells (analogous to using the same learning rate for all parameters). The paper recreates the Mini-ImageNet data set proposed in Vinyals et al 2016, and shows that the meta-learner LSTM is competitive with the current state-of-the-art (Matching Networks, Vinyals 2016) on 1- and 5-shot learning.
Strengths:
- It is intriguing -- and in hindsight, natural -- to cast the few-shot learning problem as a sequential (meta-)learning problem. While the authors did not originate the general idea of persisting learning across a series of learning problems, I think it is fair to say that they have advanced the state of the art, though I cannot confidently assert its novelty as I am not deeply familiar with recent work on meta-learning.
- The proposed approach is competitive with and outperforms Vinyals 2016 in 1-shot and 5-shot Mini-ImageNet experiments.
- The base learner in this setting (simple ConvNet classifier) is quite different from the nearest-neighbor-on-top-of-learned-embedding approach used in Vinyals 2016. It is always exciting when state-of-the-art results can be reported using very different approaches, rather than incremental follow-up work.
- As far as I know, the insight about the relationship between the memory cell and gradient descent updates is novel here. It is interesting regardless.
- The paper offers several practical insights about how to design and train an LSTM meta-learner, which should make it easier for others to replicate this work and apply these ideas to new problems. These include proper initialization, weight sharing across coordinates, and the importance of normalizing/rescaling the loss, gradient, and parameter inputs. Some of the insights have been previously described (the importance of simulating test conditions during meta-training; assuming independence between meta-learner and base learner parameters when taking gradients with respect to the meta-learner parameters), but the discussion here is useful nonetheless.
Weaknesses:
- The writing is at times quite opaque. While it describes very interesting work, I would not call the paper an enjoyable read. It took me multiple passes (as well as consulting related work) to understand the general learning problem. The task description in Section 2 (Page 2) is very abstract and uses notation and language that is not common outside of this sub-area. The paper could benefit from a brief concrete example (based on MNIST is fine), perhaps paired with a diagram illustrating a sequence of few-shot learning tasks. This would definitely make it accessible to a wider audience.
- Following up on that note, the precise nature of the N-class, few-shot learning problem here is unclear to me. Specifically, the Mini-ImageNet data set has 100 labels, of which 64/16/20 are used during meta-training/validation/testing. Does this mean that only 64/100 classes are observed through meta-training? Or does it mean that only 64/100 are observed in each batch, but on average all 100 are observed during meta-training? If it's the former, how many outputs does the softmax layer of the ConvNet base learner have during meta-training? 64 (only those observed in training) or 100 (of which 36 are never observed)? Many other details like these are unclear (see question).
- The plots in Figure 2 are pretty uninformative in and of themselves, and the discussion section offers very little insight around them.
This is an interesting paper with convincing results. It seems like a fairly clear accept, but the presentation of the ideas and work therein could be improved. I will definitely raise my score if the writing is improved. | 8: Top 50% of accepted papers, clear accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 8 | 4 |
BJPokH_Vg | rJY0-Kcll | nice paper | This work presents an LSTM-based meta-learning framework to learn the optimization algorithm of another learning algorithm (here a NN).
The paper is globally well written and the presentation of the main material is clear. The crux of the paper, drawing the parallel between the Robbins-Monro update rule and the LSTM update rule and exploiting it to satisfy the two main desiderata of few-shot learning (1- quick acquisition of new knowledge, 2- slower extraction of general transferable knowledge), is intriguing.
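For readers who have not seen the parallel spelled out, it amounts to the following (standard notation, not quoted from the paper):

```latex
\theta_t = \theta_{t-1} - \alpha_t \nabla_{\theta_{t-1}} \mathcal{L}_t
\qquad \text{vs.} \qquad
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t ,
```

which coincide when f_t = 1, c_{t-1} = \theta_{t-1}, i_t = \alpha_t and \tilde{c}_t = -\nabla_{\theta_{t-1}} \mathcal{L}_t; the meta-learner then learns the gates f_t and i_t rather than fixing them.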
Several tricks re-used from (Andrychowicz et al. 2016) such as parameter sharing and normalization, and novel design choices (specific implementation of batch normalization) are well motivated.
The experiments are convincing. This is a strong paper. My only concerns/questions are the following:
1. Can it be redundant to use the loss, gradient and parameters as input to the meta-learner? Did you do ablative studies to make sure simpler combinations are not enough.
2. It would be great if other architectural components of the network can be learned in a similar fashion (number of neurons, type of units, etc.). Do you have an opinion about this?
3. The related work section (mainly focused on meta learning) is a bit shallow. Meta-learning is a rather old topic and similar approaches have been tried to solve the same problem even if they were not using LSTMs:
- Samy Bengio PhD thesis (1989) is all about this ;-)
- Use of genetic programming for the search of a new learning rule for neural networks (S. Bengio, Y. Bengio, and J. Cloutier. 1994)
- I am convinced Schmidhuber has done something; make sure you find it and update the related work section.
Overall, I like the paper. I believe the discussed material is relevant to a wide audience at ICLR.
| 9: Top 15% of accepted papers, strong accept | 9 | -1 |
|
BJF0H7M4g | rkEFLFqee | well-executed but limited novelty and impact | This paper introduces an approach for future frame prediction in videos by decoupling motion and content to be encoded separately, and additionally using multi-scale residual connections. Qualitative and quantitative results are shown on KTH, Weizmann, and UCF-101 datasets.
The idea of decoupling motion and content is interesting, and seems to work well for this task. However, the novelty is relatively incremental given previous cited work on multi-stream networks, and it is not clear that this particular decoupling works well or is of broader interest beyond the specific task of future frame prediction.
While results on KTH and Weizmann are convincing and significantly outperform baselines, the results are less impressive on less constrained UCF-101 dataset. The qualitative examples for UCF-101 are not convincing, as discussed in the pre-review question.
Overall this is a well-executed work with an interesting though not extremely novel idea. Given the limited novelty of decoupling motion and content and impact beyond the specific application, the paper would be strengthened if this could be shown to be of broader interest e.g. for other video tasks. | 7: Good paper, accept | 7 | -1 |
|
HkUoXJW4e | rkEFLFqee | Interesting architecture for an important problem, but requires additional experiments. | 1) Summary
This paper investigates the usefulness of decoupling appearance and motion information for the problem of future frame prediction in natural videos. The method introduces a novel two-stream encoder-decoder architecture, MCNet, consisting of two separate encoders -- a convnet on single frames and a convnet+LSTM on sequences of temporal differences -- followed by combination layers (stacking + convolutions) and a deconvolutional network decoder leveraging also residual connections from the two encoders. The architecture is trained end-to-end using the objective and adversarial training strategy of Mathieu et al.
2) Contributions
+ The architecture seems novel and is well motivated. It is also somewhat related to the two-stream networks of Simonyan & Zisserman, which are very effective for real-world action recognition.
+ The qualitative results are numerous, insightful, and very convincing (including quantitatively) on KTH & Weizmann, showing the benefits of decoupling content and motion for simple scenes with periodic motions, as well as the need for residual connections.
3) Suggestions for improvement
Static dataset bias:
In response to the pre-review concerns about the observed static nature of the qualitative results, the authors added a simple baseline consisting in copying the pixels of the last observed frame. On the one hand, the updated experiments on KTH confirm the good results of the method in these conditions. On the other hand, the fact that this baseline is better than all other methods (not just the authors's) on UCF101 casts some doubts on whether reporting average statistics on UCF101 is insightful enough. Although the authors provide some qualitative analysis pertaining to the quantity of motion, further quantitative analysis seems necessary to validate the performance of this and other methods on future frame prediction. At least, the results on UCF101 should be disambiguated with respect to the type of scene, for instance by measuring the overall quantity of motion (e.g., l2 norm of time differences) and reporting PSNR and SSIM per quartile / decile. Ideally, other realistic datasets than UCF101 should be considered in complement. For instance, the Hollywood 2 dataset of Marszalek et al would be a good candidate, as it focuses on movies and often contains complex actor, camera, and background motions that would make the "pixel-copying" baseline very poor. Experiments on video datasets beyond actions, like the KITTI tracking benchmark, would also greatly improve the paper.
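To make the suggested stratified evaluation concrete, here is a rough numpy sketch; the array shapes, the PSNR metric, and the quartile bucketing are my own assumptions for illustration, not a protocol taken from the paper:

```python
import numpy as np

def motion_quantity(frames):
    # frames: (T, H, W, C) array; motion = mean l2 norm of temporal differences.
    diffs = np.diff(frames.astype(np.float64), axis=0)
    return np.sqrt((diffs ** 2).sum(axis=(1, 2, 3))).mean()

def psnr(pred, target, max_val=255.0):
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def per_quartile_psnr(context_clips, predictions, ground_truths):
    # Bucket test clips by how much motion their context contains,
    # then report mean PSNR separately for each quartile.
    motion = np.array([motion_quantity(c) for c in context_clips])
    edges = np.quantile(motion, [0.25, 0.5, 0.75])
    buckets = np.digitize(motion, edges)          # 0..3 = quartile index
    scores = np.array([psnr(p, t) for p, t in zip(predictions, ground_truths)])
    return {q: scores[buckets == q].mean() for q in range(4)}
```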
Additional recognition experiments:
As mentioned in pre-review questions, further UCF-101 experiments on action recognition tasks by fine-tuning would also greatly improve the paper. Classifying videos indeed requires learning both appearance and motion features, and the two-stream encoder + combination layers of the MCNet+Res architecture seem particularly adapted, if they indeed allowed for unsupervised pre-trainining of content and motion representations, as postulated by the authors. These experiments would also contribute to dispelling the aforementioned concerns about the static nature of the learned representations.
4) Conclusion
Overall, this paper proposes an interesting architecture for an important problem, but requires additional experiments to substantiate the claims made by the authors. If the authors make the aforementioned additional experiments and the results are convincing, then this paper would be clearly relevant for ICLR.
5) Post-rebuttal final decision
The authors did a significant amount of additional work, following the suggestions made by the reviewers, and providing additional compelling experimental evidence. This makes this one of the most experimentally thorough ones for this problem. I, therefore, increase my rating, and suggest to accept this paper. Good job! | 7: Good paper, accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7 | 4 |
HySrJeGNl | rkEFLFqee | The paper presents a method for predicting video sequences along the lines of Mathieu et al. The contribution is the separation of the predictor into two different networks, picking up motion and content, respectively.
The paper is very interesting, but the novelty is low compared to the referenced work. As also pointed out by AnonReviewer1, there is a similarity with two-stream networks (and also a whole body of work building on this seminal paper). Separating motion and content has also been proposed for other applications, e.g. pose estimation.
Details :
The paper can be clearly understood if the basic frameworks (like GANs) are known, but the presentation is not general and good enough for a broad public.
Example: Losses (7) to (9) are well known from the Mathieu et al. paper. However, to make the paper self-contained, they should be properly explained, and it should be mentioned that they are "additional" losses. The main target is the GAN loss. The adversarial part of the paper is not introduced thoroughly enough. I do agree that adversarial training is now well enough known in the community, but it should still be properly introduced. This also involves explaining that L_Disc is the loss for a second network, the discriminator, and explaining the role of both, etc.
Equation (1) : c is not explained (are these motion vectors)? c is also overloaded with the feature dimension c'.
The residual nature of the layer should be made more apparent in equation (3).
There are several typos and missing articles and prepositions ("of", etc.). The paper should be reread carefully.
| 6: Marginally above acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6 | 4 |
|
Hky8MaWVx | BkIqod5ll | Important problem, but lacks clarity and I'm not sure what the contribution is. | This work proposes a convolutional architecture for any graph-like input data (where the structure is example-dependent), or more generally, any data where the input dimensions are related by a similarity matrix. If instead each input example is associated with a transition matrix, then a random walk algorithm is used to generate a similarity matrix.
Developing convolutional or recurrent architectures for graph-like data is an important problem because we would like to develop neural networks that can handle inputs such as molecule structures or social networks. However, I don't think this work contributes anything significant to the work that has already been done in this area.
The two main proposals I see in this paper are:
1) For data associated with a transition matrix, this paper proposes that the transition matrix be converted to a similarity matrix. This seems obvious.
2) For data associated with a similarity matrix, the k nearest neighbors of each node are computed and supply the context information for that node. This also seems obvious.
Perhaps I have misunderstood the contribution, but the presentation also lacks clarity, and I cannot recommend this paper for publication.
Specific Comments:
1) On page 4: "An interesting attribute of this convolution, as compared to other convolutions on graphs is that, it preserves locality while still being applicable over different graphs with different structures." This is false; the other proposed architectures can be applied to inputs with different structures (e.g. Duvenaud et. al., Lusci et. al. for NN architectures on molecules specifically). | 3: Clear rejection | 3: The reviewer is fairly confident that the evaluation is correct | 3 | 3 |
S1bH1BMNg | BkIqod5ll | Final review. | Update: I thank the authors for their comments! After reading them, I decided to increase the rating.
This paper proposes a variant of the convolution operation suitable for a broad class of graph structures. For each node in the graph, a set of neighbours is devised by means of a random walk (the neighbours are ordered by the expected number of visits). As a result, the graph is transformed into a feature matrix resembling MATLAB’s/Caffe’s im2col output. The convolution itself becomes a matrix multiplication.
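To make the described pipeline concrete, here is a rough numpy sketch of how I read it (expected-visit counts from the random walk, top-p neighbour gathering, then convolution as a matrix multiplication); the function names, the way visits are accumulated, and the top-p selection are my own guesses rather than the authors' code:

```python
import numpy as np

def expected_visits(P, k):
    # Expected number of visits to each node within k steps of a random walk,
    # starting from every node: sum of the first k powers of the transition matrix P.
    Q, acc = np.eye(P.shape[0]), np.zeros_like(P, dtype=float)
    for _ in range(k):
        Q = Q @ P
        acc += Q
    return acc

def graph_im2col(X, P, k, p):
    # X: (num_nodes, in_channels) node features; P: row-stochastic transition matrix.
    # For each node, gather the features of its p most-visited neighbours,
    # producing a matrix analogous to im2col patches.
    visits = expected_visits(P, k)
    order = np.argsort(-visits, axis=1)[:, :p]       # top-p neighbours per node
    return X[order].reshape(X.shape[0], -1)          # (num_nodes, p * in_channels)

def graph_conv(X, P, W, k=3, p=5):
    # The convolution reduces to one matrix multiplication with a filter bank W
    # of shape (p * in_channels, out_channels).
    return graph_im2col(X, P, k, p) @ W
```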
Although the proposed convolution variant seems reasonable, I’m not convinced by the empirical evaluation. The MNIST experiment looks especially suspicious. I don’t think that this dataset is appropriate for the demonstration purposes in this case. In order to make their method applicable to the data, the authors remove important structural information (relative locations of pixels) thus artificially increasing the difficulty of the task. At the same time, they are comparing their approach with regular CNNs and conclude that the former performs poorly (and does not even reach an acceptable accuracy for the particular dataset).
I guess, to justify the presence of MNIST (or similar datasets) in the experimental section, the authors should modify their method to incorporate additional graph structure (e.g. relative locations of nodes) in cases when the relation between nodes cannot be fully described by a similarity matrix.
I believe, in its current form, the paper is not yet ready for publication but may be later resubmitted to a workshop or another conference after the concern above is addressed. | 6: Marginally above acceptance threshold | 3: The reviewer is fairly confident that the evaluation is correct | 6 | 3 |
Sk0nICB4l | BkIqod5ll | Modifies the way neighbors are computed for graph-convolutional networks, but doesn't show that this modification is an improvement. | Previous literature uses a data-derived adjacency matrix A to obtain neighbors to use as the foundation of graph convolution. They propose extending the set of neighbors by additionally including nodes reachable by i<=k steps in this graph. This introduces an extra tunable parameter k, so it needs some justification over the previous k=1 solution. In one experiment provided (Merk), using k=1 worked better. They don't specify which k they used, just that it was big enough for there to be p=5 nodes obtained as neighbors. In the second experiment (MNIST), they used k=1 for their experiments, which is what previous work (Coates & Ng, 2011) proposed as well. A compelling experiment would compare to k=1 and show that using k>1 gives an improvement strong enough to justify an extra hyper-parameter. | 3: Clear rejection | 3 | -1
|
ByQ-cqT7x | rJfMusFll | clearly written, natural extension of previous work | The paper discusses a "batch" method in the RL setup for improving chat-bots.
The authors provide a nice overview of the RL setup they are using and present an algorithm which is similar to the previously published online setup for the same problem. They make a comparison to the online version and explore several modeling choices.
I find the writing clear, and the algorithm a natural extension of the online version.
Below are some constructive remarks:
- Comparison of the constant vs. per-state value function: In the artificial experiment there was no difference between the two while on the real-life task there was. It will be good to understand why, and add this to the discussion. Here is one option:
- For the artificial task it seems like you are giving the constant value function an unfair advantage, as it can update all the weights of the model, and not just the top layer, like the per-state value function.
- section 2.2:
sentence before last: s' is not defined.
last sentence: missing "... in the stochastic case." at the end.
- Section 4.1 last paragraph: "While Bot-1 is not significant ..." => "While Bot-1 is not significantly different from ML ..."
| 8: Top 50% of accepted papers, clear accept | 3: The reviewer is fairly confident that the evaluation is correct | 8 | 3 |
ByVsGkMVx | rJfMusFll | Review | This paper extends neural conversational models into the batch reinforcement learning setting. The idea is that you can collect human scoring data for some responses from a dialogue model; however, such scores are expensive. Thus, it is natural to use off-policy learning – training a base policy on unsupervised data, deploying that policy to collect human scores, and then learning off-line from those scores.
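Schematically, the family of updates being discussed has the form of a batch, importance-weighted policy gradient with a learned value baseline; this is my shorthand for the general recipe, not a quotation of the paper's exact estimator:

```latex
\nabla_{\theta} J \approx \frac{1}{N} \sum_{i=1}^{N}
\frac{\pi_{\theta}(a_i \mid s_i)}{\beta(a_i \mid s_i)}
\big(r_i - V(s_i)\big)\, \nabla_{\theta} \log \pi_{\theta}(a_i \mid s_i),
```

where the logged responses a_i and human scores r_i were collected under the deployed base policy \beta.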
While the overall contribution is modest (extending off-policy actor-critic to the application of dialogue generation), the approach is well-motivated, and the paper is written clearly and is easy to understand.
My main concern is that the primary dataset used (restaurant recommendations) is very small (6000 conversations). In fact, it is several orders of magnitude smaller than other datasets used in the literature (e.g. Twitter, the Ubuntu Dialogue Corpus) for dialogue generation. It is a bit surprising to me that RNN chatbots (with no additional structure) are able to generate reasonable utterances on such a small dataset. Wen et al. (2016) are able to do this on a similarly small restaurant dataset, but this is mostly because they map directly from dialogue states to surface form, rather than some embedding representation of the context. Thus, it remains to be seen if the approaches in this paper also result in improvements when much more unsupervised data is available.
References:
Wen, Tsung-Hsien, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. "A Network-based End-to-End Trainable Task-oriented Dialogue System." arXiv preprint arXiv:1604.04562 (2016).
| 6: Marginally above acceptance threshold | 3: The reviewer is fairly confident that the evaluation is correct | 6 | 3 |
H1bSmrx4x | rJfMusFll | The authors propose to use an off-policy actor-critic algorithm in a batch setting to improve chat-bots.
The approach is well motivated and the paper is well written, except that some intuition for why the batch version outperforms the online version is missing (see comments on "clarification regarding batch vs. online setting").
The artificial experiments are instructive, and the real-world experiments were performed very thoroughly although the results show only modest improvement. | 7: Good paper, accept | 3: The reviewer is fairly confident that the evaluation is correct | 7 | 3 |
|
rkEX3x_Nx | rywUcQogx | Unclear about the contribution | It is not clear to me at all what this paper is contributing. Deep CCA (Andrew et al., 2013) already gives the gradient derivation of the correlation objective with respect to the network outputs, which are then back-propagated to update the network weights. Again, the paper gives the gradient of the correlation (i.e. the CCA objective) w.r.t. the network outputs, so it is confusing to me when the authors say that their differentiable version enables them to back-propagate directly through the computation of CCA.
| 3: Clear rejection | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 3 | 4 |
SJ-aT5ZNg | rywUcQogx | paper needs to be more explicit | After a second look of the paper, I am still confused what the authors are trying to achieve.
The CCA objective is not differentiable in the sense that the sum of singular values (trace norm) of T is not differentiable. It appears to me (from the title and Section 3) that the authors are trying to solve this problem. However,
-- Did the authors simply reformulate the CCA objective or change the objective? The authors need to be explicit here.
-- What is the relationship between the retrieval objective and the "CCA layer"? I could imagine different ways of combining them, such as combination or bi-level optimization. And I could not find discussion about this in section 3. For this, equations would be helpful.
-- Even though the CCA objective is not differentiable in the above sense, it has not caused major problems for training (e.g., in principle we need batch training, but empirically using large minibatches works fine). The authors need to justify why the original gradient computation is problematic for what they are trying to achieve. From the authors' response to my question 2, it seems they still use the SVD of T, so I am not sure if the proposed method has an advantage in computational efficiency.
In terms of paper organization, it is better to describe the retrieval objective earlier than in the experiments. And I still encourage the authors to conduct the comparison with contrastive loss that I mentioned in my previous comments. | 4: Ok but not good enough - rejection | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4 | 4 |
ry-2Cn1Eg | rywUcQogx | Needs significant work before it can be publishable | The authors propose to combine a CCA objective with a downstream loss. This is a really nice and natural idea. However, both the execution and presentation leave a lot to be desired in the current version of the paper.
It is not clear what the overall objective is. This was asked in a pre-review question but the answer did not fully clarify it for me. Is it the sum of the CCA objective and the final (top-layer) objective, including the CCA constraints? Is there some interpolation of the two objectives?
By saying that the top-layer objective is "cosine distance" or "squared cosine distance", do you really mean you are just minimizing this distance between the matched pairs in the two views? If so, then of course that does not work out of the box without the intervening CCA layer: You could minimize it by setting all of the projections to a single point. A better comparison would be against a contrastive loss like the Hermann & Blunsom one mentioned in the reviewer question, which aims to both minimize the distance for matched pairs and separate mismatched ones (where "mismatched" ones can be uniformly drawn, or picked in some cleverer way). But other discriminative top-layer objectives that are tailored to a downstream task could make sense.
There is some loose terminology in the paper. The authors refer to the "correlation" and "cross-correlation" between two vectors. "Correlation" normally applies to scalars, so you need to define what you mean here. "Cross-correlation" typically refers to time series. In eq. (2) you are taking the max of a matrix. Finally I am not too sure in what way this approach is "fully differentiable" while regular CCA is not -- perhaps it is worth revisiting this term as well.
Also just a small note about the relationship between cosine distance and correlation: they are related when we view the dimensions of each of the two vectors as samples of a single random variable. In that case the cosine distance of the (mean-normalized) vectors is the same as the correlation between the two corresponding random variables. In CCA we are viewing each dimension of the vectors as its own random variable. So I fear the claim about cosine distance and correlation is a bit of a red herring here.
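Spelling out the identity being referred to (a standard fact, not something from the paper): for x, y in R^d with their d dimensions treated as paired samples,

```latex
\cos\!\big(x - \bar{x}\mathbf{1},\; y - \bar{y}\mathbf{1}\big)
= \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}
       {\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}
= \operatorname{corr}(x, y),
```

whereas CCA treats each dimension as its own random variable and measures correlation across the data points.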
A couple of typos:
"prosed" --> "proposed"
"allong" --> "along"
| 3: Clear rejection | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 3 | 4 |
rypQ3tJ4e | HkuVu3ige | This paper investigates the issue of orthogonality of the transfer weight matrix in RNNs and suggests an optimization formulation on the manifold of (semi)orthogonal matrices. | Vanishing and exploding gradients make the optimization of RNNs very challenging. The issue becomes worse on tasks with long-term dependencies that require longer RNNs. One of the suggested approaches to improve the optimization is to optimize in a way that the transfer matrix is almost orthogonal. This paper investigates the role of orthogonality in optimization and learning, which is very important. The writing is sound and clear and the arguments are easy to follow. The suggested optimization method is very interesting. The main shortcoming of this paper is the experiments, which I find very important, and I hope the authors can update the experiment section significantly. Below I mention some comments on the experiment section:
1- I think the experiments are not enough. At the very least, report the result on the adding problem and language modeling task on Penn Treebank.
2- I understand that the copying task becomes difficult with a non-linearity. However, removing the non-linearity makes the optimization very different and, therefore, it is very hard to conclude anything from the results on the copying task.
3- I was not able to find the number of hidden units used for RNNs in different tasks.
4- Please report the running time of your method in the paper for different numbers of hidden units, compare it with the SGD and mention the NN package you have used.
5- The results in Table 1 and Table 2 might also suggest that the orthogonality is not really helpful, since even without a margin the numbers are very close compared to the case when you find the optimal margin. Am I right?
6- What do we learn from Figure 2? It is left without any discussion. | 5: Marginally below acceptance threshold | 5 | -1 |
|
ryRAK-8Vg | HkuVu3ige | Interesting investigation into orthogonal parametrizations and initializations for RNNs | This paper investigates the impact of orthogonal weight matrices on learning dynamics in RNNs. The paper proposes a variety of interesting optimization formulations that enforce orthogonality in the recurrent weight matrix to varying degrees. The experimental results demonstrate several conclusions: enforcing exact orthogonality does not help learning, while enforcing soft orthogonality or initializing to orthogonal weights can substantially improve learning. While some of the optimization methods proposed currently require matrix inversion and are therefore slow in wall clock time, orthogonal initialization and some of the soft orthogonality constraints are relatively inexpensive and may find their way into practical use.
The experiments are generally done to a high standard and yield a variety of useful insights, and the writing is clear.
The experimental results are based on using a fixed learning rate for the different regularization strengths. Learning speed might be highly dependent on this, and different strengths may admit different maximal stable learning rates. It would be instructive to optimize the learning rate for each margin separately (maybe on one of the shorter sequence lengths) to see how soft orthogonality impacts the stability of the learning process. Fig. 5, for instance, shows that a sigmoid improves stability—but perhaps slightly reducing the learning rate for the non-sigmoid Gaussian prior RNN would make the learning well-behaved again for weightings less than 1.
Fig. 4 shows singular values converging around 1.05 rather than 1. Does initializing to orthogonal matrices multiplied by 1.05 confer any noticeable advantage over standard orthogonal matrices? Especially on the T=10K copy task?
“Curiously, larger margins and even models without sigmoidal constraints on the spectrum (no margin) performed well as long as they were initialized to be orthogonal suggesting that evolution away from orthogonality is not a serious problem on this task.” This is consistent with the analysis given in Saxe et al. 2013, where for deep linear nets, if a singular value is initialized to 1 but dies away during training, this is because it must be zero to implement the desired input-output map. More broadly, an open question has been whether orthogonality is useful as an initialization, as proposed by Saxe et al., where its role is mainly as a preconditioner which makes optimization proceed quickly but doesn’t fundamentally change the optimization problem; or whether it is useful as a regularizer, as proposed by Arjovsky et al. 2015 and Henaff et al. 2015, that is, as an additional constraint in the optimization problem (minimize loss subject to weights being orthogonal). These experiments seem to show that mere initialization to orthogonal weights is enough to reap an optimization speed advantage, and that too much regularization begins to hurt performance—i.e., substantially changing the optimization problem is undesirable. This point is also apparent in Fig. 2: In terms of the training loss on MNIST (Fig. 2), no margin does almost indistinguishably from a margin of 1 or .1. However in terms of accuracy, a margin of .1 is best. This shows that large or nonexistent margins (i.e., orthogonal initializations) enable fast optimization of the training loss, but among models that attain similar training loss, the more nearly orthogonal weights perform better. This starts to separate out the optimization speed advantage conferred by orthogonality from the regularization advantage it confers. It may be useful to more explicitly discuss the initialization vs regularization dimension in the text.
Overall, this paper contributes a variety of techniques and intuitions which are likely to be useful in training RNNs.
| 7: Good paper, accept | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7 | 4 |
ByCXAcHVl | HkuVu3ige | Interesting question and proposed approach, with significance restricted by limited experimental settings. | The paper is well-motivated, and is part of a line of recent work investigating the use of orthogonal weight matrices within recurrent neural networks. While using orthogonal weights addresses the issue of vanishing/exploding gradients, it is unclear whether anything is lost, either in representational power or in trainability, by enforcing orthogonality. As such, an empirical investigation that examines how these properties are affected by deviation from orthogonality is a useful contribution.
The paper is clearly written, and the primary formulation for investigating soft orthogonality constraints (representing the weight matrices in their SVD factorized form, which gives explicit control over the singular values) is clean and natural, albeit not necessarily ideal from a practical computational standpoint (as it requires maintaining multiple orthogonal weight matrices each requiring an expensive update step). I am unaware of this approach being investigated previously.
The experimental side, however, is somewhat lacking. The paper evaluates two tasks: a copy task, using an RNN architecture without transition non-linearities, and sequential/permuted sequential MNIST. These are reasonable choices for an initial evaluation, but are both toy problems and don't shed much light on the practical aspects of the proposed approaches. An evaluation in a more realistic setting would be valuable (e.g., a language modeling task).
Furthermore, while investigating pure RNN's makes sense for evaluating effects of orthogonality, it feels somewhat academic: LSTMs also provide a mechanism to capture longer-term dependencies, and in the tasks where the proposed approach was compared directly to an LSTM, it was significantly outperformed. It would be very interesting to see the effects of the proposed soft orthogonality constraint in additional architectures (e.g., deep feed-forward architectures, or whether there's any benefit when embedded within an LSTM, although this seems doubtful).
Overall, the paper addresses a clear-cut question with a well-motivated approach, and has interesting findings on some toy datasets. As such I think it could provide a valuable contribution. However, the significance of the work is restricted by the limited experimental settings (both datasets and network architectures). | 5: Marginally below acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 5 | 4 |
H1cHmCBNg | B1KBHtcel | An Application of PN Network | This paper addresses automated argumentation mining using a pointer network. Although the task and the discussion are interesting, the contribution and the novelty are marginal because this is a single-task application of PN among many potential tasks. | 4: Ok but not good enough - rejection | 3: The reviewer is fairly confident that the evaluation is correct | 4 | 3
HkJF5ei7l | B1KBHtcel | Solid work, fit unclear | This paper proposes a model for the task of argumentation mining (labeling the set of relationships between statements expressed as sentence-sized spans in a short text). The model combines a pointer network component that identifies links between statements and a classifier that predicts the roles of these statements. The resulting model works well: It outperforms strong baselines, even on datasets with fewer than 100 training examples.
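For reference, the pointer component scores candidate link targets with the standard attention of pointer networks (Vinyals et al.); schematically, and not necessarily with the paper's exact parameterization:

```latex
u^{i}_{j} = v^{\top} \tanh\!\big(W_1 e_j + W_2 d_i\big),
\qquad
p(\text{link } i \to j) = \operatorname{softmax}_j\big(u^{i}_{j}\big),
```

where e_j are encoder states over the argument components and d_i is the decoder state for component i.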
I don't see any major technical issues with this paper, and the results are strong. I am concerned, though, that the paper doesn't make a substantial novel contribution to representation learning. It focuses on ways to adapt reasonably mature techniques to a novel NLP problem. I think that one of the ACL conferences would be a better fit for this work.
The choice of a pointer network for this problem seems reasonable, though (as noted by other commenters) the paper does not make any substantial comparison with other possible ways of producing trees. The paper does a solid job at breaking down the results quantitatively, but I would appreciate some examples of model output and some qualitative error analysis.
Detail notes:
- Figure 2 appears to have an error. You report that the decoder produces a distribution over input indices only, but you show an example of the network pointing to an output index in one case.
- I don't think "Wei12" is a name. | 5: Marginally below acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 5 | 4 |
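The pointing mechanism this review refers to, a decoder that produces a distribution over input indices only, can be sketched as standard additive attention. This is a hedged illustration with assumed layer names and sizes, not the submission's exact architecture.

```python
import torch

class PointerScorer(torch.nn.Module):
    """Scores each encoder state against the current decoder state and
    normalizes over input positions, so the output is always a distribution
    over *input* indices (the point the review makes about Figure 2)."""

    def __init__(self, enc_dim, dec_dim, att_dim):
        super().__init__()
        self.proj_enc = torch.nn.Linear(enc_dim, att_dim, bias=False)
        self.proj_dec = torch.nn.Linear(dec_dim, att_dim, bias=False)
        self.v = torch.nn.Linear(att_dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (num_components, enc_dim), one state per argument component
        # dec_state:  (dec_dim,), the decoder state at the current step
        scores = self.v(torch.tanh(self.proj_enc(enc_states) + self.proj_dec(dec_state)))
        return torch.softmax(scores.squeeze(-1), dim=-1)  # links point back into the input
```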
rJA1LgTQg | B1KBHtcel | Review | This paper addresses the problem of argument mining, which consists of finding argument types and predicting the relationships between the arguments. The authors propose a pointer network structure to recover the argument relations. They also propose modifications to the pointer network to perform joint training on both the type and link prediction tasks. Overall, the model is reasonable, but I am not sure if ICLR is the best venue for this work.
My first concern with the paper is the novelty of the model. The pointer network has been proposed before. The proposed multi-task learning method is interesting, but the authors only verify it on one task. This makes me feel that the submission may be better suited to an NLP conference than to ICLR.
The authors state that the pointer network is less restrictive than some of the existing tree-prediction methods. However, the datasets seem to contain only single trees or forests, and the stack-based method can be used for forest prediction by adding a virtual root node to each example (as is done in dependency parsing). Therefore, I think the experiments as they stand unfortunately cannot demonstrate the advantages of pointer network models.
My second concern is the choice of target task. Given that the authors want to analyze the structures between sentences, is argumentation mining the best dataset? For example, the authors could verify their model by applying it to other tasks that require tree structures, such as dependency parsing. As for NLP applications, I find the assumption that the boundaries of ACs are given to be a very strong constraint that could potentially limit the usefulness of the proposed model.
Overall, in terms of ML, I also feel that the baseline methods the authors compared against are probably strong for the argument mining task, but not necessarily strong enough for general tree/forest prediction tasks (as there are other tree/forest prediction methods). In terms of NLP applications, I think the assumption of having AC boundaries is too restrictive, and maybe ICLR is not the best venue for this submission.
| 5: Marginally below acceptance threshold | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 5 | 4 |
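The virtual-root device mentioned in the review above, which turns a forest into a single tree so that stack-based methods apply, is simple to state in code. This is just an illustrative helper with assumed conventions (parents given as indices, -1 marking a root), not anything from the submission.

```python
def add_virtual_root(parents):
    """Convert a forest into a single tree by attaching every root to a new
    virtual root at index 0. `parents[i] == -1` marks a root in the input;
    all node indices are shifted by one in the output, whose node 0 is the
    virtual root (which here simply points to itself)."""
    shifted = [0]
    for p in parents:
        shifted.append(0 if p == -1 else p + 1)
    return shifted

# Example: two trees, rooted at nodes 0 and 2 (0-indexed)
# node:    0   1   2   3
# parent: -1   0  -1   2
print(add_virtual_root([-1, 0, -1, 2]))  # [0, 0, 1, 0, 3]
```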
H1snDRS4e | ryTYxh5ll | Interesting problem and good motivation, unconvincing solution architecture | The problem of utilizing all available information (across modalities) about a product to learn a meaningful "joint" embedding is an interesting one, and certainly seems like a promising direction for improving recommender systems, especially in the "cold start" scenario. I'm unaware of approaches combining as many modalities as proposed in this paper, so an effective solution could indeed be significant. However, there are many aspects of the proposed architecture that seem sub-optimal to me:
1. A major benefit of neural-network based systems is that the entire system can be trained end-to-end, jointly. The proposed approach sticks together largely pre-trained modules for different modalities... this can be justifiable when there is very little training data available on which to train jointly. With 10M product pairs, however, this doesn't seem to be the case for the Amazon dataset (although I haven't worked with this dataset myself so perhaps I'm missing something... either way it's not discussed at all in the paper). I consider the lack of a jointly fine-tuned model a major shortcoming of the proposed approach.
2. The discussion of "pairwise residual units" is confusing and not well-motivated. The residual formulation (if I understand it correctly) applies a ReLU layer to the concatenation of the modality-specific embeddings, giving a new similarity (after dot products) that can be added to the similarity obtained from the concatenation directly. Why not just have an additional fully-connected layer that mixes the modality-specific embeddings to form a final embedding (perhaps of lower dimensionality)? This should at least be presented as a baseline if the pairwise residual unit is claimed as a contribution... I don't find the provided explanation convincing (in what way does the residual approach reduce the parameter count?).
3. More minor: The choice of TextCNN for the text embedding vectors seems fine (although I wonder how an LSTM-based approach would perform)... However, the details of how it is used are obscured in the paper. In response to a question, the authors mention that it runs on the concatenation of the first 10 words of the title and product description. Especially for the description, this seems too short to capture much information.
More care could be given to motivating the choices made in the paper. Finally, I'm not familiar with the state of the art on this dataset... do the comparisons accurately reflect it? It seems only one competing technique is presented, with none on the more challenging cold-start scenarios.
Minor detail: In the second paragraph of page 3, there is a reference that just says (cite Julian). | 3: Clear rejection | 3: The reviewer is fairly confident that the evaluation is correct | 3 | 3 |
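The pairwise residual unit as described in the review above, together with the fully-connected mixing layer the reviewer suggests as a baseline, can be sketched as follows. Dimensions, names, and the exact nonlinearity are assumptions made for illustration, not the paper's verified implementation; `emb_a` and `emb_b` stand for the concatenated modality-specific embeddings of two items.

```python
import torch

def pairwise_residual_similarity(emb_a, emb_b, relu_layer):
    """Similarity as the review describes it: the dot product of the
    concatenated modality embeddings plus the dot product of their
    nonlinearly transformed versions (the 'residual' term)."""
    base = (emb_a * emb_b).sum(-1)
    residual = (relu_layer(emb_a) * relu_layer(emb_b)).sum(-1)
    return base + residual

def mixed_embedding_similarity(emb_a, emb_b, mix_layer):
    """The baseline the reviewer asks for: one extra fully-connected layer
    mixing the concatenated modality embeddings into a (possibly smaller)
    joint embedding, scored with a plain dot product."""
    return (mix_layer(emb_a) * mix_layer(emb_b)).sum(-1)

dim = 256  # assumed size of the concatenated modality embeddings
relu_layer = torch.nn.Sequential(torch.nn.Linear(dim, dim), torch.nn.ReLU())
mix_layer = torch.nn.Linear(dim, 64)
a, b = torch.randn(dim), torch.randn(dim)
print(pairwise_residual_similarity(a, b, relu_layer),
      mixed_embedding_similarity(a, b, mix_layer))
```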
BkHGIg4Vx | ryTYxh5ll | This paper proposes combining different modalities of product content (e.g., review text, images, co-purchase info, etc.) in order to learn one unified product representation for recommender systems. While the idea of combining multiple sources of information is indeed an effective approach for handling data sparsity in recommender systems, I have some reservations about the approach proposed in this paper:
1) Some modalities are not necessarily relevant for the recommendation task or item similarity. For example, cover images of books or movies (which are product types in the experiments of this paper) do not tell us much about their content. The paper should clearly motivate and show how different modalities contribute to the final task.
2) The connection between the proposed joint product embedding and residual networks is a bit awkward. The original residual layers add the input vector to the output of an MLP, i.e., several affine transformations followed by non-linearities. These layers allow training very deep neural networks (up to 1000 layers) as a result of easier gradient flow. In contrast, the pairwise residual unit of this paper adds the dot product of two item vectors to the dot product of the same vectors after applying a simple non-linearity. The motivation for this architecture is not obvious, and it is not well justified in the paper.
3) While it is a minor point, the choice of the term "embedding" for the dot product of two items is unusual. Embeddings usually refer to vectors in R^n associated with specific entities. Here the term refers to the final output, which renders the output layer in Figure 2 pointless.
Finally, I believe the paper can be improved by focusing more on motivating the architectural choices and by making the description more concise. The paper is currently very long (11 pages) and I strongly encourage you to shorten it.
| 3: Clear rejection | 3: The reviewer is fairly confident that the evaluation is correct | 3 | 3 |
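For contrast with the pairwise formulation discussed in the review above, a standard residual layer in the ResNet sense adds the input back to the output of a small MLP. A minimal sketch, with layer sizes assumed for illustration:

```python
import torch

class ResidualBlock(torch.nn.Module):
    """Classic residual layer: y = x + MLP(x), which eases gradient flow in
    very deep networks; the addition is on vectors, not on dot products."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.ReLU(), torch.nn.Linear(hidden, dim)
        )

    def forward(self, x):
        return x + self.mlp(x)
```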
rkEPBMlEe | ryTYxh5ll | The paper proposes a method to incorporate arbitrary content, such as images and text, into recommender systems. These various features have been used previously to improve recommender systems, though what's novel here is the contribution of a general-purpose framework for combining arbitrary feature types.
Positively, the idea of combining many heterogeneous feature types into an RS is ambitious and fairly novel. Previous works have certainly sought to include various feature types to improve RSs, though combining different feature types successfully is difficult.
Negatively, there are a few aspects of the paper that are a bit ad-hoc. In particular:
-- There are a lot of pieces here being "glued together" to build the system. Different parts are trained separately and then combined using another learning stage. There's nothing wrong with doing things in this way (and indeed it's the most straightforward approach and the one most likely to work), but it pushes the contribution more toward the "system building" direction as opposed to the "end-to-end learning" direction, which is more the focus of this conference.
-- Further to the above, this makes it hard to say how easily the model would generalize to arbitrary feature types, say e.g. if I had audio or video features describing the item. To incorporate such features into the system would require a lot of implementation work, as opposed to being a system where I can just throw more features in and expect it to work.
The pre-review comments address some of these issues. Some of the responses aren't entirely convincing, e.g. it'd be better to have the same baselines across tables, rather than dropping some because "the case had already been made elsewhere".
Other than that, I like the effort to combine several different feature types on real recommender-system datasets. I'm not entirely sure how strong the baselines are; they seem more like ablation-style experiments than comparisons against any state-of-the-art RS.
| 5: Marginally below acceptance threshold | 3: The reviewer is fairly confident that the evaluation is correct | 5 | 3 |