note_id
stringlengths
9
12
forum_id
stringlengths
9
13
review_title
stringlengths
0
500
review_body
stringlengths
1
31.1k
review_rating
stringlengths
0
31.1k
review_confidence
stringclasses
38 values
review_rating_integer
int64
-1
106
review_confidence_integer
int64
-1
5
rJ_HlyGEx
By14kuqxx
Evaluation of DNN inference hardware approach in simulator. Avoid processing zero-bits and achieve speed improvements.
An interesting idea, and seems reasonably justified and well-explored in the paper, though this reviewer is no expert in this area, and not familiar with the prior work. Paper is fairly clear. Performance evaluation (in simulation) is on a reasonable range of recent image conv-nets, and seems thorough enough. Rather specialized application area may have limited appeal to ICLR audience. (hence the "below threshold rating", I don't have any fundamental structural / methodological criticism for this paper.) Improve your bibliography citation style - differentiate between parenthetical citations and inline citations where only the date is in parentheses.
5: Marginally below acceptance threshold
5
-1
Hk2YlkzVg
BkVsEMYel
Interesting analysis
This paper addresses the question of which functions are well suited to deep networks, as opposed to shallow networks. The basic intuition is convincing and fairly straightforward. Pooling operations bring together information. When information is correlated, it can be more efficiently used if the geometry of pooling regions matches the correlations so that it can be brought together more efficiently. Shallow networks without layers of localized pooling lack this mechanism to combine correlated information efficiently. The theoretical results are focused on convolutional arithmetic circuits, building on prior theoretical results of the authors. The results make use of the interesting technical notion of separability, which in some sense measures the degree to which a function can be represented as the composition of independent functions. Because separability is measured relative to a partition of the input, it is an appropriate mechanism for measuring the complexity of functions relative to a particular geometry of pooling operations. Many of the technical notions are pretty intuitive, although the tensor analysis is pretty terse and not easy to follow without knowledge of the authors’ prior work. In some sense the comparison between deep and shallow networks is somewhat misleading, since the shallow networks lack a hierarchical pooling structure. For example, a shallow convolutional network with RELU and max pooling does not really make sense, since the max occurs over the whole image. So it seems that the paper is really more of an analysis of the effect of pooling vs. not having pooling. For example, it is not clear that a deep CNN without pooling would be any more efficient than a shallow network, from this work. It is not clear how much the theoretical results depend on the use of a model with product pooling, and how they might be extended to the more common max pooling. Even if theoretical results are difficult to derive in this case, simple illustrative examples might be helpful. In fact, if the authors prepare a longer version of the paper for a journal I think the results could be made more intuitive if they could add a simple toy example of a function that can be efficiently represented with a convolutional arithmetic circuit when the pooling structure fits the correlations, and perhaps showing also how this could be represented with a convolutional network with RELU and max pooling. I would also appreciate a more explicit discussion of how the depth of a deep network affects the separability of functions that can be represented. A shallow network doesn’t have local pooling, so the difference between deep and shallow if perhaps mostly one of pooling vs. not pooling. However, practitioners find that very deep networks seem to be more effective than “deep” networks with only a few convolutional layers and pooling. The paper does not explicitly discuss whether their results provide insight into this behavior. Overall, I think that the paper attacks an important problem in an interesting way. It is not so convincing that this really gets to the heart of why depth is so important, because of the theoretical limitation to arithmetic circuits, and because the comparison is to shallow networks that are without localized pooling.
6: Marginally above acceptance threshold
3: The reviewer is fairly confident that the evaluation is correct
6
3
BJsXW08Nx
BkVsEMYel
Huge algebraic machinery, so far only used to perform intuitive model selection, but promising direction.
The paper provides a highly complex algebraic machinery to analyze the type of functions covered by convolutional network. As in most attempts in this direction in the literature, the ideal networks described in paper, which have to be interpretable as polynomials over tensors, do not match the type of CNNs used in practice: for instance the Relu non-linearity is replaced with a product of linear functions (or a sum of logs). While the paper is very technical to read, every concept is clearly stated and mathematical terminology properly introduced. Still, I think some the authors could make some effort to make the key concepts more accessible, and give a more intuitive understanding of what the separation rank means rather before piling up different mathematical interpretation. My SVM-era algebra is quite rusted, and I am not familiar with the separation rank framework: it would have been much easier for me to first fully understand a simple and gentle case (shallow network in section 5.3), than the general deep case. To summarize my understanding of the key theorem 1 result: - The upper bound of the separation rank is used to show that in the shallow case, this rank grows AT MOST linearly with the network size (as measured by the only hidden layer). So exponential network sizes are caused by this rank needing to grow exponentially, as required by the partition. - In the deep case, one also uses the case that the upper bound is linear in the size of the network (as measured by the last hidden layer), however, this situation is caused by the selection of a partition (I^low, J^high), and the maximal rank induced by this partition is only linear anyway, hence the network size can remain linear. If tried my best to summarize the key point of this paper and still probably failed at it, which shows how complex is this notion of partition rank, and that its linear growth with network size can either be a good or bad thing depending on the setting. Hopefully, someone will come one day with an explanation that holds in a single slide. While this is worth publishing as conference paper in its present form, I have two suggestions that, IMHO, would make this work more significant: On the theory side, we are still very far from the completeness of the PAC bound papers of the "shallow era". In particular, the non-probabilistic lower and upper bound in theorem 1 are probably loose, and there is no PAC-like theory to tell us which one to use and what is the predicted impact on performance (not just the intuition). Also, in the prediction of the inductive bias, the other half is missing. This paper attempts to predict the maximal representation capacity of a DNN under bounded network size constraints, but one of the reason why this size has to be bounded is overfitting (justified by PAC or VC-dim like bounds). If we consider the expected risk as the sum of the empirical risk and the structural risk, this paper only seems to address fully the empirical risk minimization part, freezing the structural risk. On the practice side, an issue is that experiments in this paper mostly confirm what is obvious through intuition, or some simpler form of reasonings. For instance to use convolutions that join pixels which are symmetrical in images to detect symmetry. Basic hand-crafted pattern detectors, as they have been used in computer vision for decades, would just do the job. What would be a great motivation for using this framework is if it answered questions that simple human intuition cannot, and for which we are still in the dark: one example I could think of in the recent use of gated convolutions 'a trous' for 1D speech signal, popularized in Google WaveNet (https://deepmind.com/blog/wavenet-generative-model-raw-audio/). Note that 1D inputs would also simplify the notation!
7: Good paper, accept
3: The reviewer is fairly confident that the evaluation is correct
7
3
Hk0ZrUlIx
BkVsEMYel
Promising approach to show why deep CNN works well in practice
This paper investigates the fact why deep networks perform well in practice and how modifying the geometry of pooling can make the polynomially sized deep network to provide a function with exponentially high separation rank (for certain partitioning.) In the authors' previous works, they showed the superiority of deep networks over shallows when the activation function is ReLu and the pooling is max/mean pooling but in the current paper there is no activation function after conv and the pooling is just a multiplication of the node values. Although for the experimental results they've considered both scenarios. Actually, the general reasoning for this problem is hard, therefore, this drawback is not significant and the current contribution adds a reasonable amount of knowledge to the literature. This paper studies the convolutional arithmetic circuits and shows how this model can address the inductive biases and how pooling can adjust these biases. This interesting contribution gives an intuition about how deep network can capture the correlation between the input variables when its size is polynomial but and correlation is exponential. It worth to note that although the authors tried to express their notation and definitions carefully where they were very successful, it would be helpful if they elaborate a bit more on their definitions, expressions, and conclusions in the sense to make them more accessible.
7: Good paper, accept
7
-1
SyEvEhMVg
Hyq4yhile
Transfer learning in RL using a nonlinear CCA like approach
This paper explores transfer in reinforcement learning between agents that may be morphologically distinct. The key idea is for the source and target agent to have learned a shared skill, and then to use this to construct abstract feature spaces to enable the transfer of a new unshared skill in the source agent to the target agent. The paper is related to much other work on transfer that uses shared latent spaces, such as CCA and its variants, including manifold alignment and kernel CCA. The paper reports on experiments using a simple physics simulator between robot arms consisting of three vs. four links. For comparison, a simple CCA based approach is shown, although it would have been preferable to see comparisons for something more current and up to date, such as manifold alignment or kernel CCA. A three layer neural net is used to construct the latent feature spaces. The problem of transfer in RL is extremely important, and receives less attention than it should. This work uses an interesting hypothesis of trying to construct transfer based on shared skills between source and target agent. This is a promising approach. However, the comparisons to related approaches is not very up to date, and the domains are fairly simplistic. There is little by way of theoretical development of the ideas using MDP theory.
6: Marginally above acceptance threshold
6
-1
SyMMfs-Ve
Hyq4yhile
Review
The paper considers the problem of transferring skills between robots with different morphologies, in the context of agents that have to perform several tasks. A core component of the proposed approach is to use a task-invariant future space, which can be shared between tasks & between agents. Compared to previous work (Ammar et al. 2015), it seems the main contribution here is to “assume that good correspondences in episodic tasks can be extracted through time alignment” (Sec. 2). This is an interesting hypothesis. There is also similarity to work by Raimalwala et al (2016), but the authors argue their method is better equipped to handle non-linear dynamics. These are two interesting hypotheses, however I don’t see that they have been verified in the presented empirical results. In particular, the question of the pairing correspondence seems crucial. What happens when the time alignment is not suitable. Is it possible to use dynamic time warping (or similar method) to achieve reasonable results? Robustness to misspecification of the pairing correspondence P seems a major concern. In general, more comparison to other transfer methods, including those listed in Sec.2, would be very valuable. The addition of Sec.5.1 is definitely a right step in this direction, but represents a small portion of the recent work on transfer learning. I appreciate that other methods transfer other pieces of information (e.g. the policy), but still if the end goal is better performance, what is worth transferring (in addition to how to do the transfer) should be a reasonable question to explore. Overall, the paper tackles an important problem, but this is a very active area of research, and further comparison to other methods would be worthwhile. The method proposed of transferring the representation is well motivated, cleanly described, and conceptually sound. The assumption that time alignment can be used for the state pairing seems problematic, and should be further validated.
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
S1zWvGXNg
Hyq4yhile
Review
This paper presents an approach for skills transfer from one task to another in a control setting (trained by RL) by forcing the embeddings learned on two different tasks to be close (L2 penalty). The experiments are conducted in MuJoCo, with a set of experiments being from the state of the joints/links (5.2/5.3) and a set of experiments on the pixels (5.4). They exhibit transfer from arms with different number of links, and from a torque-driven arm to a tendon-driven arm. One limitation of the paper is that the authors suppose that time alignment is trivial, because the tasks are all episodic and in the same domain. Time alignment is one form of domain adaptation / transfer that is not dealt with in the paper, that could be dealt with through subsampling, dynamic time warping, or learning a matching function (e.g. neural network). General remarks: The approach is compared to CCA, which is a relevant baseline. However, as the paper is purely experimental, another baseline (worse than CCA) would be to just have the random projections for "f" and "g" (the embedding functions on the two domains), to check that the bad performance of the "no transfer" version of the model is due to over-specialisation of these embeddings. I would also add (for information) that the problem of learning invariant feature spaces is also linked to metric learning (e.g. [Xing et al. 2002]). More generally, no parallel is drawn with multi-task learning in ML. In the case of knowledge transfer (4.1.1), it may make sense to anneal \alpha. The experiments feel a bit rushed. In particular, the performance of the baseline being always 0 (no transfer at all) is uninformative, at least a much bigger sample budget should be tested. Also, why does Figure 7.b contain no "CCA" nor "direct mapping" results? Another concern that I have with the experiments: (if/how) did the author control for the fact that the embeddings were trained with more iterations in the case of doing transfer? Overall, the study of transfer is most welcomed in RL. The experiments in this paper are interesting enough for publication, but the paper could have been more thorough.
7: Good paper, accept
3: The reviewer is fairly confident that the evaluation is correct
7
3
BJfBNnAme
Bk3F5Y9lx
Addresses a fundamental limitation of the VAE. Great idea, well executed. Accept
This paper proposes an elegant solution to a very important problem in VAEs, namely that the model over-regularizes itself by killing off latent dimensions. People have used annealing of the KL term and “free bits” to hack around this issue but a better solution is needed. The offered solution is to introduce sparsity for the latent representation: for every input only a few latent distributions will be activated but across the dataset many latents can still be learned. What I didn’t understand is why the authors need the topology in this latent representation. Why not place a prior over arbitrary subsets of latents? That seems to increase the representational power a lot without compromising the solution to the problem you are trying to solve. Now the number of ways the latents can combine is no longer exponentially large, which seems a pity. The first paragraph on p.7 is a mystery to me: “An effect of this …samples”. How can under-utilization of model capacity lead to overfitting? The experiments are modest but sufficient. This paper has an interesting idea that may resolve a fundamental issue of VAEs and thus deserves a place in this conference.
8: Top 50% of accepted papers, clear accept
8
-1
B1w7Uhb4x
Bk3F5Y9lx
Interesting idea, experimental evidence doesn't confirm the presented story
The paper presents a version of a variational autoencoder that uses a discrete latent variable that masks the activation of the latent code, making only a subset (an "epitome") of the latent variables active for a given sample. The justification for this choice is that by letting different latent variables be active for different samples, the model is forced to use more of the latent code than a usual VAE. While the problem of latent variable over pruning is important and has been highlighted in the literature before in the context of variational inference, the proposed solution doesn't seem to solve it beyond, for instance, a mixture of VAEs. Indeed, a mixture of VAEs would have been a great baseline for the experiments in the paper, as it uses a categorical variable (the mixture component) along with multiple VAEs. The main difference between a mixture and an epitomic VAE is the sharing of parameters between the different "mixture components" in the epitomic VAE case. The experimental section presents misleading results. 1. The log-likelihood of the proposed models is evaluated with Parzen window estimator. A significantly more accurate lower bound on likelihood that is available for the VAEs is not reported. In reviewer's experience continuous MNIST likelihood of upwards of 900 nats is easy to obtain with a modestly sized VAE. 2. The exposition changes between dealing with binary MNIST and continuous MNIST experiments. This is confusing, because these versions of the dataset present different challenges for modeling with likelihood-based models. Continuous MNIST is harder to model with high-capacity likelihood optimizing models, because the dataset lies in a proper subspace of the 784-dimensional space (some pixels are always or almost always equal to 0), and hence probability density can be arbitrarily large on this subspace. Models that try to maximize the likelihood often exploit this option of maximizing the likelihood by concentrating the probability around the subspace at the expense of actually modeling the data. The samples of a well-tuned VAE trained on binary MNIST (or a VAE trained on continuous MNIST to which noise has been appropriately added) tend to look much better than the ones presented in experimental results. 3. The claim that the VAE uses its capacity to "overfit" to the training data is not justified. No evidence is presented that the reconstruction likelihood on the training data is significantly higher than the reconstruction likelihood on the test data. It's misleading to use a technical term like "overfitting" to mean something else. 4. The use of dropout in dropout VAE is not specified: is dropout applied to the latent variables, or to the hidden layers of the encoder/decoder? The two options will exhibit very different behaviors. 5. MNIST eVAE samples and reconstructions look more like a more diverse version of 2d VAE samples/reconstructions - they are blurry, the model doesn't encode precise position of strokes. This is consistent with an interpretation of eVAE as a kind of mixture of smaller VAEs, rather than a higher-dimensional VAE. It is misleading to claim that it outperforms a high-dimensional VAE based on this evidence. In reviewer's opinion the paper is not yet ready for publication. A stronger baseline VAE evaluated with evidence lower bound (or another reliable method) is essential for comparing the proposed eVAE to VAEs.
5: Marginally below acceptance threshold
5
-1
H1MkPyf4g
Bk3F5Y9lx
skeptical of motivation and experiments
This paper replaces the Gaussian prior often used in a VAE with a group sparse prior. They modify the approximate posterior function so that it also generates group sparse samples. The development of novel forms for the generative model and inference process in VAEs is an active and important area of research. I don't believe the specific choice of prior proposed in this paper is very well motivated however. I believe several of the conceptual claims are incorrect. The experimental results are unconvincing, and I suspect compare log likelihoods in bits against competing algorithms in nats. Some more detailed comments: In Table 1, the log likelihoods reported for competing techniques are all in nats. The reported log likelihood of cVAE using 10K samples is not only higher than the likelihood of true data samples, but is also higher than the log likelihood that can be achieved by fitting a 10K k-means mixture model to the data (eg as done in "A note on the evaluation of generative models"). It should nearly impossible to outperform a 10K k-means mixture on Parzen estimation, which makes me extremely skeptical of these eVAE results. However, if you assume that the eVAE log likelihood is actually in bits, and multiply it by log 2 to convert to nats, then it corresponds to a totally believable log likelihood. Note that some Parzen window implementations report log likelihood in bits. Is this experiment comparing log likelihood in bits to competing log likelihoods in nats? (also, label units -- eg bits or nats -- in table) It would be really, really, good to report and compare the variational lower bound on the log likelihood!! Alternatively, if you are concerned your bound is loose, you can use AIS to get a more exact measure of the log likelihood. Even if the Parzen window results are correct, Parzen estimates of log likelihood are extremely poor. They possess any drawback of log likelihood evaluation (which they approximate), and then have many additional drawbacks as well. The MNIST sample quality does not appear to be visually competitive. Also -- it appears that the images are of the probability of activation for each pixel, rather than actual samples from the model. Samples would be more accurate, but either way make sure to describe what is shown in the figure. There are no experiments on non-toy datasets. I am still concerned about most of the issues I raised in my questions below. Briefly, some comments on the authors' response: 1. "minibatches are constructed to not only have a random subset of training examples but also be balanced w.r.t. to epitome assignment (Alg. 1, ln. 4)." Nice! This makes me feel better about why all the epitomes will be used. 2. I don't think your response addresses why C_vae would trade off between data reconstruction and being factorial. The approximate posterior is factorial by construction -- there's nothing in C_vae that can make it more or less factorial. 3. "For C_vae to have zero contribution from the KL term of a particular z_d (in other words, that unit is deactivated), it has to have all the examples in the training set be deactivated (KL term of zero) for that unit" This isn't true. A standard VAE can set the variance to 1 and the mean to 0 (KL term of 0) for some examples in the training set, and have non-zero KL for other training examples. 4. The VAE loss is trained on a lower bound on the log likelihood, though it does have a term that looks like reconstruction error. Naively, I would imagine that if it overfits, this would correspond to data samples becoming more likely under the generative model. 5/6. See Parzen concerns above. It's strange to train a binary model, and then treat it's probability of activation as a sample in a continuous space. 6. "we can only evaluate the model from its samples" I don't believe this is true. You are training on a lower bound on the log likelihood, which immediately provides another method of quantitative evaluation. Additionally, you could use techniques such as AIS to compute the exact log likelihood. 7. I don't believe Parzen window evaluation is a better measure of model quality, even in terms of sample generation, than log likelihood.
4: Ok but not good enough - rejection
4
-1
BkNSquWEx
ry2YOrcge
A paper on a challenging task
This paper proposes a weakly supervised, end-to-end neural network model to learn a natural language interface for tables. The neural programmer is applied to the WikiTableQuestions, a natural language QA dataset and achieves reasonable accuracy. An ensemble further boosts the performance by combining components built with different configurations, and achieves comparable performance as the traditional natural language semantic parser baseline. Dropout and weight decay seem to play a significant role. It'll be interesting to see more error analysis and the major reason for the still low accuracy compared to many other NLP tasks. What's the headroom and oracle number with the current approach?
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
S1V4jFBNg
ry2YOrcge
An interesting paper for a rather hard problem.
The paper presents an end-to-end neural network model for the problem of designing natural language interfaces for database queries. The proposed approach uses only weak supervision signals to learn the parameters of the model. Unlike in traditional approaches, where the problem is solved by semantically parsing a natural language query into logical forms and executing those logical forms over the given data base, the proposed approach trains a neural network in an end-to-end manner which goes directly from the natural language query to the final answer obtained by processing the data base. This is achieved by formulating a collection of operations to be performed over the data base as continuous operations, the distributions over which is learnt using the now-standard soft attention mechanisms. The model is validated on the smallish WikiTableQuestions dataset, where the authors show that a single model performs worse than the approach which uses the traditional Semantic Parsing technique. However an ensemble of 15 models (trained in a variety of ways) results in comparable performance to the state of the art. I feel that the paper proposes an interesting solution to the hard problem of learning natural language interfaces for data bases. The model is an extension of the previously proposed models of Neelakantan 2016. The experimental section is rather weak though. The authors only show their model work on a single smallish dataset. Would love to see more ablation studies of their model and comparison against fancier version of memnns (i do not buy their initial response to not testing against memory networks). I do have a few objections though. -- The details of the model are rather convoluted and the Section 2.1 is not very clearly written. In particular with the absence of the accompanying code the model will be super hard to replicate. I wish the authors do a better job in explaining the details as to how exactly the discrete operations are modeled, what is the role of the "row selector", the "scalar answer" and the "lookup answer" etc. -- The authors do a full attention over the entire database. Do they think this approach would scale when the data bases are huge (millions of rows)? Wish they experimented with larger datasets as well.
7: Good paper, accept
3: The reviewer is fairly confident that the evaluation is correct
7
3
r1CLSZMNe
ry2YOrcge
review
This paper proposes a weakly supervised, end-to-end neural network model for solving a challenging natural language understanding task. As an extension of the Neural Programmer, this work aims at overcoming the ambiguities imposed by natural language. By predefining a set of operations, the model is able to learn the interface between the language reasoning and answer composition using backpropagation. On the WikiTableQuestions dataset, it is able to achieve a slightly better performance than the traditional semantic parser methods. Overall, this is a very interesting and promising work as it involves a lot of real-world challenges about natural language understanding. The intuitions and design of the model are very clear, but the complication makes the paper a bit difficult to read, which means the model is also difficult to be reimplemented. I would expect to see more details about model ablation and it would help us figure out the prominent parts of the model design.
6: Marginally above acceptance threshold
3: The reviewer is fairly confident that the evaluation is correct
6
3
r1tEPyWEg
SyWvgP5el
Review
The paper looks at the problem of transferring a policy learned in a simulator to a target real-world system. The proposed approach considers using an ensemble of simulated source domains, along with adversarial training, to learn a robust policy that is able to generalize to several target domains. Overall, the paper tackles an interesting problem, and provides a reasonable solution. The notion of adversarial training used here does not seem the same as other recent literature (e.g. on GANs). It would be useful to add more details on a few components, as discussed in the question/response round. I also encourage including the results with alternative policy gradient subroutines, even if they don’t perform well (e.g. Reinforce), as well as results with and without the baseline on the value function. Such results are very useful to other researchers.
7: Good paper, accept
7
-1
BJwiMAWVe
SyWvgP5el
ICLR 2017 conference review
Paper addresses systematic discrepancies between simulated and real-world policy control domains. Proposed method contains two ideas: 1) training on an ensemble of models in an adversarial fashion to learn policies that are robust to errors and 2) adaptation of the source domain ensemble using data from a (real-world) target domain. > Significance Paper addresses and important and significant problem. The approach taken in addressing it is also interesting > Clarity Paper is well written, but does require domain knowledge to understand. My main concerns were well addressed by the rebuttal and corresponding revisions to the paper.
7: Good paper, accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7
4
BkwBVorNl
SyWvgP5el
Ensemble training and transfer, a good submission
This paper explores ensemble optimisation in the context of policy-gradient training. Ensemble training has been a low-hanging fruit for many years in the this space and this paper finally touches on this interesting subject. The paper is well written and accessible. In particular the questions posed in section 4 are well posed and interesting. That said the paper does have some very weak points, most obviously that all of its results are for a very particular choice of domain+parameters. I eagerly look forward to the journal version where these experiments are repeated for all sorts of source domain/target domain/parameter combinations. <rant Finally a stylistic comment that the authors can feel free to ignore. I don't like the trend of every paper coming up with a new acronymy wEiRDLY cAsEd name. Especially here when the idea is so simple. Why not use words? English words from the dictionary. Instead of "EPOpt" and "EPOpt-e", you can write "ensemble training" and "robust ensemble training". Is that not clearer? />
8: Top 50% of accepted papers, clear accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
8
4
HJcd-ZDrl
HyE81pKxl
Plagiarism ?
First I would like to apologize for the delay in reviewing. Section 2: I'm afraid this sentence " Simple Recurrent Neural Network which has been shown to be able to implement a Turing Machine [7] is an extension of feedforward neural networks. The idea in RNNs is that they share parameters for different time-steps. This idea, which is called parameter sharing, enables RNNs to be used for sequential data." and following paragraphs are copied and pasted from https://arxiv.org/abs/1501.00299 .
3: Clear rejection
3: The reviewer is fairly confident that the evaluation is correct
3
3
r1YBkDBEl
HyE81pKxl
No new ideas in the paper
The paper proposes a model for learning vector representations of sequences in a multi-task framework. This is achieved by having a single encoder to embed the "source" sequence and having multiple decoders and classifiers/regressors on top of it: one for each task. The authors argue that having multiple decoders on top acts as a regularizer and helps learn embeddings for tasks which do not have a lot of data. The authors also propose a way to incrementally learn embeddings for novel tasks: where the idea is to fix a part of the encoder network corresponding to previous tasks and add a smaller encoder in parallel to learn embeddings for the new task. My main concern with the paper is its novelty. The ideas proposed in this paper have been around since quite sometime and there is nothing new that the paper has to offer. Furthermore the experimental section is also rather weak. The authors only compare their model to a collection of variants of their own model, as opposed to any baseline in the literature. Furthermore even in the case of incremental multi-task learning, similar ideas have been dealt with before. See https://arxiv.org/pdf/1606.04671.pdf for instance. While the problem is not exactly the same, the ideas are quite similar.
4: Ok but not good enough - rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4
4
HkCuYIxEg
HyE81pKxl
my review
The paper presents an approach to do multi-task training, either jointly or incrementally, for sequence embedding tasks. This is similar in spirit to Luong et al and also borrows from others (like Collobert and Weston). In fact, the idea of reusing word vectors for other tasks with less data is very often used now, see for instance all the image captioning papers. The main difference of this paper is to try the incremental version to avoid forgetting and build on novel data, and provide experiments on a query based system. I found the paper too long (more than the proposed 8+1 limit) and containing too much introduction for this community (4 pages of background???). Overall, I didn't find enough novelty in the paper. The authors point to the query term weighting, which I see more as an application of the sequence embeddings rather than a novelty.
4: Ok but not good enough - rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4
4
BJB-lc-Nl
B1gtu5ilg
interesting connections to human perception
On one hand this paper is fairly standard in that it uses deep metric learning with a Siamese architecture. On the other, the connections to human perception involving persistence is quite interesting. I'm not an expert in human vision, but the comparison in general and the induced hierarchical groupings in particular seem like something that should interest people in this community. The experimental suite is ok but I was disappointed that it is 100% synthetic. The authors could have used a minimally viable real dataset such as ALOI http://aloi.science.uva.nl . In summary, the mechanics of the proposed approach are not new, but the findings about the transfer of similarity judgement to novel object classes are interesting.
7: Good paper, accept
3: The reviewer is fairly confident that the evaluation is correct
7
3
SJFcgXEEl
B1gtu5ilg
Potentially interesting idea, but important references and baseline comparisons are missing.
This paper proposes a model to learn across different views of objects. The key insight is to use a triplet loss that encourages two different views of the same object to be closer than an image of a different object. The approach is evaluated on object instance and category retrieval and compared against baseline CNNs (untrained AlexNet and AlexNet fine-tuned for category classification) using fc7 features with cosine distance. Furthermore, a comparison against human perception on the "Tenenbaum objects” is shown. Positives: Leveraging a triplet loss for this problem may have some novelty (although it may be somewhat limited given some concurrent work; see below). The paper is reasonably written. Negatives: The paper is missing relevant references of related work in this space and should compare against an existing approach. More details: The “image purification” paper is very related to this work: [A] Joint Embeddings of Shapes and Images via CNN Image Purification. Hao Su*, Yangyan Li*, Charles Qi, Noa Fish, Daniel Cohen-Or, Leonidas Guibas. SIGGRAPH Asia 2015. There they learn to map CNN features to (hand-designed) light field descriptors of 3D shapes for view-invariant object retrieval. If possible, it would be good to compare directly against this approach (e.g., the cross-view retrieval experiment in Table 1 of [A]). It appears that code and data is available online (http://shapenet.github.io/JointEmbedding/). Somewhat related to the proposed method is recent work on multi-view 3D object retrieval: [B] Multi-View 3D Object Retrieval With Deep Embedding Network. Haiyun Guo, Jinqiao Wang, Yue Gao, Jianqiang Li, and Hanqing Lu. IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 25, NO. 12, DECEMBER 2016. There they developed a triplet loss as well, but for multi-view retrieval (given multiple images of the same object). Given the similarity of the developed approach, it somewhat limits the novelty of the proposed approach in my view. Also related are approaches that predict a volumetric representation of an input 2D image (going from image to canonical orientation of 3D shape): [C] R. Girdhar, D. Fouhey, M. Rodriguez, A. Gupta. Learning a Predictable and Generative Vector Representation for Objects. ECCV 2016. [D] Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling. Jiajun Wu*, Chengkai Zhang*, Tianfan Xue, William T. Freeman, and Joshua B. Tenenbaum. NIPS 2016. For the experiments, I would like to see a comparison using different feature layers (e.g., conv4, conv5, pool4, pool5) and feature comparison (dot product, Eucllidean). It has been shown that different layers and feature comparisons perform differently for a given task, e.g., [E] Deep Exemplar 2D-3D Detection by Adapting from Real to Rendered Views. Francisco Massa, Bryan C. Russell, Mathieu Aubry. Conference on Computer Vision and Pattern Recognition (CVPR), 2016. [F] Understanding Deep Features with Computer-Generated Imagery. Mathieu Aubry and Bryan C. Russell. IEEE International Conference on Computer Vision (ICCV), 2015.
5: Marginally below acceptance threshold
5
-1
SyEdJEGEe
B1gtu5ilg
Nice form of supervision to explore
I think learning a deep feature representation that is supervised to group dissimilar views of the same object is interesting. The paper isn't technically especially novel but that doesn't bother me at all. It does a good job exploring a new form of supervision with a new dataset. I'm also not bothered that the dataset is synthetic, but it would be good to have more experiments with real data, as well. I think the paper goes too far in linking itself to human vision. I would prefer the intro not have as much cognitive science or neuroscience. The second to fourth paragraphs of the intro in particular feels like it over-stating the contribution of this paper as somehow revealing some truth about human vision. Really, the narrative is much simpler -- "we often want deep feature representations that are viewpoint invariant. We supervise a deep network accordingly. Humans also have some capability to be viewpoint invariant which has been widely studied [citations]". I am skeptical of any claimed connections bigger than that. I think 3.1 should not be based on tree-to-tree distance comparisons but instead based on the entire matrix of instance-to-instance similarity assessments. Why do the lossy conversion to trees first? I don't think "Remarkably" is justified in the statement "Remarkably, we found that OPnets similarity judgement matches a set of data on human similarity judgement, significantly better than AlexNet" I'm not an expert on human vision, but from browsing online and from what I've learned before it seems that "object persistence" frequently relates to the concept of occlusion. Occlusion is never mentioned in this paper. I feel like the use of human vision terms might be misleading or overclaiming.
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
HyWm1orEx
HJDBUF5le
Solid Contribution
This paper proposes a hierarchical generative model where the lower level consists of points within datasets and the higher level models unordered sets of datasets. The basic idea is to use a "double" variational bound where a higher level latent variable describes datasets and a lower level latent variable describes individual examples. Hierarchical modeling is an important and high impact problem, and I think that it's under-explored in the Deep Learning literature. Pros: -The few-shot learning results look good, but I'm not an expert in this area. -The idea of using a "double" variational bound in a hierarchical generative model is well presented and seems widely applicable. Questions: -When training the statistic network, are minibatches (i.e. subsets of the examples) used? -If not, does using minibatches actually give you an unbiased estimator of the full gradient (if you had used all examples)? For example, what if the statistic network wants to pull out if *any* example from the dataset has a certain feature and treat that as the characterization. This seems to fit the graphical model on the right side of figure 1. If your statistic network is trained on minibatches, it won't be able to learn this characterization, because a given minibatch will be missing some of the examples from the dataset. Using minibatches (as opposed to using all examples in the dataset) to train the statistic network seems like it would limit the expressive power of the model. Suggestions: -Hierarchical forecasting (electricity / sales) could be an interesting and practical use case for this type of model.
8: Top 50% of accepted papers, clear accept
8
-1
Sy8k9FbNg
HJDBUF5le
Interesting paper that starts to expand the repertoire of variational autoencoders
The authors introduce a variant of the variational autoencoder (VAE) that models dataset-level latent variables. The idea is clearly motivated and well described. In my mind the greatest contribution of this paper is the movement beyond the relatively simple graphical model structure of the traditional VAEs and the introduction of more interesting structures to the deep learning community. Comments: - It's not clear to me why this should be called a "statistician". Learning an approximate posterior over summary statistics is not the only imaginable way to summarize a dataset with a neural network. One could consider a maximum likelihood approach, etc. In general it felt like the paper could be more clear, if it avoided coining new terms like "statistic network" and stuck to the more accurate "approximate posterior". - The experiments are nice, and I appreciate the response to my question regarding "one shot generation". I still think that language needs to be clarified, specifically at the end of page 6. My understanding of Figure 5 is the following: Take an input set, compute the approximate posterior over the context vector, then generate from the forward model given samples from the approximate posterior. I would like clarification on the following: (a) Are the data point dependent vectors z generated from the forward model or taken from the approximate posterior? (b) I agree that the samples are of high-quality, but that is not a quantified statement. The advantage of VAEs over GANs is that we have natural ways of computing log-probabilities. To that end, one "proper" way of computing the "one shot generation" performance is to report log p(x | c) (where c is sampled from the approximate posterior) or log p(x) for held-out datasets. I suspect that log probability performance of these networks relative to a vanilla VAE without the context latent variable will be impressive. I still don't see a reason not to include that.
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
rkfZJCVVl
HJDBUF5le
a nice addition to the one-/few-shot learning literature
Sorry for the late review -- I've been having technical problems with OpenReview which prevented me from posting. This paper presents a method for learning to predict things from sets of data points. The method is a hierarchical version of the VAE, where the top layer consists of an abstract context unit that summarizes a dataset. Experiments show that the method is able to "learn to learn" by acquiring the ability to learn distributions from small numbers of examples. Overall, this paper is a nice addition to the literature on one- or few-shot learning. The method is conceptually simple and elegant, and seems to perform well. Compared to other recent papers on one-shot learning, the proposed method is simpler, and is based on unsupervised representation learning. The paper is clearly written and a pleasure to read. The name of the paper is overly grandiose relative to what was done; the proposed method doesn’t seem to have much in common with a statistician, unless one means by that "someone who thinks up statistics". The experiments are well chosen, and the few-shot learning results seem pretty solid given the simplicity of the method. The spatial MNIST dataset is interesting and might make a good toy benchmark. The inputs in Figure 4 seem pretty dense, though; shouldn’t the method be able to recognize the distribution with fewer samples? (Nitpick: the red points in Figure 4 don’t seem to correspond to meaningful points as was claimed in the text.) Will the authors release the code?
8: Top 50% of accepted papers, clear accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
8
4
By5gfegEg
SJNDWNOlg
An outdated method with misleading claims.
This paper explores different strategies for instance-level image retrieval with deep CNNs. The approach consists of extracting features from a network pre-trained for image classification (e.g. VGG), and post-process them for image retrieval. In other words, the network is off-the-shelf and solely acts as a feature extractor. The post-processing strategies are borrowed from traditional retrieval pipelines relying on hand-crafted features (e.g. SIFT + Fisher Vectors), denoted by the authors as "traditional wisdom". Specifically, the authors examine where to extract features in the network (i.e. features are neurons activations of a convolution layer), which type of feature aggregation and normalization performs best, whether resizing images helps, whether combining multiple scales helps, and so on. While this type of experimental study is reasonable and well motivated, it suffers from a huge problem. Namely it "ignores" 2 major recent works that are in direct contradictions with many claims of the paper ([a] "End-to-end Learning of Deep Visual Representations for Image Retrieval" by Gordo et al. and [b] "CNN Image Retrieval Learns from BoW: Unsupervised Fine-Tuning with Hard Examples" by Radenović et al., both ECCV'16 papers). These works have shown that training for retrieval can be achieved with a siamese architectures and have demonstrated outstanding performance. As a result, many claims and findings of the paper are either outdated, questionable or just wrong. Here are some of the misleading claims: - "Features aggregated from these feature maps have been exploited for image retrieval tasks and achieved state-of-the-art performances in recent years." Until [a] (not cited), the state-of-the-art was still largely dominated by sparse invariant features based methods (see last Table in [a]). - "the proposed method [...] outperforms the state-of-the-art methods on four typical datasets" That is not true, for the same reasons than above, and also because the state-of-the-art is now dominated by [a] and [b]. - "Also in situations where a large numbers of training samples are not available, instance retrieval using unsupervised method is still preferable and may be the only option.". This is a questionable opinion. The method exposed in "End-to-end Learning of Deep Visual Representations for Image Retrieval" by Gordo et al. outperforms the state-of-the-art on the UKB dataset (3.84 without QE or DBA) whereas it was trained for landmarks retrieval and not objects, i.e. in a different retrieval context. This demonstrates that in spite of insufficient training data, training is still possible and beneficial. - Finally, most findings are not even new or surprising (e.g. aggregate several regions in a multi-scale manner was already achieved by Tolias at al, etc.). So the interest of the paper is limited overall. In addition, there are some problems in the experiments. For instance, the tuning experiments are only conducted on the Oxford dataset and using a single network (VGG-19), whereas it is not clear whether these conditions are well representative of all datasets and all networks (it is well known that the Oxford dataset behaves very differently than the Holidays dataset, for instance). In addition, tuning is performed very aggressively, making it look like the authors are tuning on the test set (e.g. see Table 3). To conclude, the paper is one year too late with respect to recent developments in the state of the art.
3: Clear rejection
3
-1
SkvcxOZVg
SJNDWNOlg
Not much utility in the paper
Authors investigate how to use pretrained CNNs for retrieval and perform an extensive evaluation of the influence of various parameters. For detailed comments on everything see the questions I posted earlier. The summary is here: I don't think we learn much from this paper: we already knew that we should use the last conv layer, we knew we should use PCA with whitening, we knew we should use original size images (authors say Tolias didn't do this as they resized the images, but they did this exactly for the same reason as authors didn't evaluate on Holidays - the images are too big. So they basically used "as large as possible" image sizes, which is what this paper effectively suggests as well), etc. This paper essentially concatenates methods that people have already used, and performs some more parameter tweaking to achieve the state-of-the-art (while the tweaking is actually performed on the test set of some of the tests). The setting of the state-of-the-art results is quite misleading as it doesn't really come from the good choice of parameters, but mainly due to the usage of the deeper VGG-19 network. Furthermore, I don't think it's sufficient to just try one network and claim these are the best practices for using CNNs for instance retrieval - what about ResNet, what about Inception, I don't know how to apply any of these conclusions for those networks, and would these conclusions even hold for them. Furthermore the parameter tweaking was done on Oxford, I really can't tell what conclusions would we get if we tuned on UKB for example. So a more appropriate paper title would be "What are the best parameter values for VGG-19 on Oxford/Paris benchmarks?" - I don't think this is sufficiently novel nor interesting for the community.
3: Clear rejection
3
-1
SJv4GtW4g
SJNDWNOlg
A paper with some good but limited and possibly slightly outdated experiments on object retrieval with CNNs
The paper conducts a detailed evaluation of different CNN architectures applied to image retrieval. The authors focus on testing various architectural choices, but do not propose or compare to end-to-end learning frameworks. Technically, the contribution is clear, particularly with the promised clarifications on how multiple scales are handled in the representation. However, I am still not entirely clear whether there would be a difference in the multi-scale settting for full and cropped queries. While the paper focuses on comparing different baseline architectures for CNN-based image retrieval, several recent papers have proposed to learn end-to-end representations specific for this task, with very good result (see for instance the recent work by Gordo et al. "End-to-end Learning of Deep Visual Representations for Image Retrieval"). The authors clarify that their work is orthogonal to papers such as Gordo et al. as they assess instead the performance of networks pre-trained from image classification. In fact, they also indicate that image retrieval is more difficult than image classification -- this is because it is performed by using features originally trained for classification. I can partially accept this argument. However, given the results in recent papers, it is clear than end-to-end training is far superior in practice and it is not clear the analysis developed by the authors in this work would transfer or be useful for that case as well.
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
B13xt9b4x
rJJRDvcex
This paper proposes a cascade of paired (left/right, up/down) 1D RNNs as a module in CNNs in order to quickly add global context information to features without the need for stacking many convolutional layers. Experimental results are presented on image classification and semantic segmentation tasks. Pros: - The paper is very clear and easy to read. - Enough details are given that the paper can likely be reproduced with or without source code. - Using 1D RNNs inside CNNs is a topic that deserves more experimental exploration than what exists in the literature. Cons (elaborated on below): (1) Contributions relative to, e.g. Bell et al., are minor. (2) Disappointed in the actual use of the proposed L-RNN module versus how it's sold in the intro. (3) Classification experiments are not convincing. (1,2): The introduction states w.r.t. Bell et al. "more substantial differences are two fold: first, we treat the L-RNN module as a general block, that can be inserted into any layer of a modern architecture, such as into a residual module. Second, we show (section 4) that the L-RNN can be formulated to be inserted into a pre-trained FCN (by initializing with zero recurrence matrices), and that the entire network can then be fine-tuned end-to-end." I felt positive about these contributions after reading the intro, but then much less so after reading the experimental sections. Based on the first contribution ("general block that can be inserted into any layer"), I strongly expected to see the L-RNN block integrated throughout the CNN starting from near the input. However, the architectures for classification and segmentation only place the module towards the very end of the network. While not exactly the same as Bell et al. (there are many technical details that differ), it is close. The paper does not compare to the design from Bell et al. Is there any advantage to the proposed design? Or is it a variation that performs similarly? What happens if L-RNN is integrated earlier in the network, as suggested by the introduction? The second difference is a bit more solid, but still does not rise to a 'substantive difference' in my view. Note that Bell et al. also integrate 1D RNNs into an ImageNet pretrained VGG-16 model. I do, however, think that the method of integration proposed in this paper (zero initialization) may be more elegant and does not require two-stage training by first freezing the lower layers and then later unfreezing them. (3) I am generally skeptical of the utility of classification experiments on CIFAR-10 when presented in isolation (e.g., no results on ImageNet too). The issue is that CIFAR-10 is not interesting as a task unto itself *and* methods that work well on CIFAR-10 do not necessarily generalize to other tasks. ImageNet has been useful because, thus far, it produces features that generalize well to other tasks. Showing good results on ImageNet is much more likely to demonstrate a model that learns generalizable features. However, that is not even necessarily true, and ideally I would like to see that that a model that does well on ImageNet in fact transfers its benefit to at least one other ask (e.g., detection). One additional issue with the CIFAR experiments is that I expect to see a direct comparison of models A-F with and without L-RNN. It is hard to understand from the presented results if L-RNN actually adds much. In sum, I have a hard time taking away any valuable information from the CIFAR experiments. Minor suggestion: - Figure 4 is hard to read. The pixelated rounded corners on the yellow boxes are distracting.
5: Marginally below acceptance threshold
5
-1
S1b82bmHe
rJJRDvcex
Interesting approach to large field of view networks
Please provide an evaluation of the quality, clarity, originality and significance of this work, including a list of its pros and cons. Paper summary: this work proposes to use RNNs inside a convolutional network architecture as a complementary mechanism to propagate spatial information across the image. Promising results on classification and semantic labeling are reported. Review summary: The text is clear, the idea well describe, the experiments seem well constructed and do not overclaim. Overall it is not a earth shattering paper, but a good piece of incremental science. Pros: * Clear description * Well built experiments * Simple yet effective idea * No overclaiming * Detailed comparison with related work architectures Cons: * Idea somewhat incremental (e.g. can be seen as derivative from Bell 2016) * Results are good, but do not improve over state of the art Quality: the ideas are sound, experiments well built and analysed. Clarity: easy to read, and mostly clear (but some relevant details left out, see comments below) Originality: minor, this is a different combination of ideas well known. Significance: seems like a good step forward in our quest to learn good practices to build neural networks for task X (here semantic labelling and classification). Specific comments: * Section 2.2 “we introduction more nonlinearities (through the convolutional layers and ...”. Convolutional layers are linear operators. * Section 2.2, why exactly RNN cannot have pooling operators ? I do not see what would impede it. * Section 3 “into the computational block”, which block ? Seems like a typo, please rephrase. * Figure 2b and 2c not present ? Please fix figure or references to it. * Maybe add a short description of GRU in the appendix, for completeness ? * Section 5.1, last sentence. Not sure what is meant. The convolutions + relu and pooling in ResNet do provide non-linearities “between layers” too. Please clarify * Section 5.2.1 (and appendix A), how is the learning rate increased and decreased ? Manually ? This is an important detail that should be made explicit. Is the learning rate schedule the same in all experiments of each table ? If there is a human in the loop, what is the variance in results between “two human schedulers” ? * Section 5.2.1, last sentence; “we certainly have a strong baseline”; the Pascal VOC12 for competition 6 reports 85.4 mIoU as best known results. So no, 64.4 is not “certainly strong”. Please tune down the statement. * Section 5.2.3 Modules -> modules * The results ignore any mention of increased memory usage or computation cost. This is not a small detail. Please add a discussion on the topic. * Section 6 “adding multi-scale spatial” -> “adding spatial” (there is nothing inherently “multi” in the RNN) * Section 6 Furthermoe -> Furthermore * Appendix C, redundant with Figure 5 ?
7: Good paper, accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7
4
ryeRthbEl
rJJRDvcex
The paper proposes a method of integrating recurrent layers within larger, potentially pre-trained, convolutional networks. The objective is to combine the feature extraction abilities of CNNs with the ability of RNNs to gather global context information. The authors validate their idea on two tasks, image classification (on CIFAR-10) and semantic segmentation (on PASCAL VOC12). On the positive side, the paper is clear and well-written (apart from some occasional typos), the proposed idea is simple and could be adopted by other works, and can be deployed as a beneficial perturbation of existing systems, which is practically important if one wants to increase the performance of a system without retraining it from scratch. The evaluation is also systematic, providing a clear ablation study. On the negative side, the novelty of the work is relatively limited, while the validation is lacking a bit. Regarding novelty, the idea of combining a recurrent layer with a CNN, something practically very similar was proposed in Bell et al (2016). There are a few technical differences (e.g. cascading versus applying in parallel the recurrent layers), but in my understanding these are minor changes. The idea of initializing the recurrent network with the CNN is reasonable but is at the level of improving one wrong choice in the original work of Bell, rather than really proposing something novel. This contribution (" we use RNNs within layers") is repeatedly mentioned in the paper (including intro & conclusion), but in my understanding was part of Bell et al, modulo minor changes. Regarding the evaluation, experiments on CIFAR are interesting, but only as proof of concept. Furthermore, as noted in my early question, Wide Residual Networks (Sergey Zagoruyko, Nikos Komodakis, BMVC16) report better results on CIFAR-10 (4% error), while not using any recurrent layers (rather using instead a wide, VGG-type, ResNet variant). So. The authors answer: "Wide Residual Networks use the depth of the network to spread the receptive field across the entire image (DenseNet (Huang et al., 2016) similarly uses depth). Thus there is no need for recurrence within layers to capture contextual information. In contrast, we show that a shallow CNN, where the receptive field would be limited, can capture contextual information within the whole image if a L-RNN is used." So, we agree that WRN do not need recurrence - and can still do better. The point of my question has practically been whether using a recurrent layer is really necessary; I can understand the answer as being "yes, if you want to keep your network shallow". I do not necessarily see why one would want to keep one's network shallow. Probably an evaluation on imagenet would bring some more insight about the merit of this layer. Regarding semantic segmentation, one of my questions has been: "Is the boost you are obtaining due to something special to the recurrent layer, or is simply because one is adding extra parameters on top of a pre-trained network? (I admit I may have missed some details of your experimental evaluation)" The answer was: "...For PASCAL segmentation, we add the L-RNN into a pre-trained network (this adds recurrence parameters), and again show that this boosts performance - more so than adding the same number of parameters as extra CNN layers - as it is able to model long-range dependences" I could not find one such experiment in the paper ('more so than adding the same number of parameters as extra CNN layers'); I understand that you have 2048 x 2048 connections for the recurrence, it would be interesting to see what you get by spreading them over (non-recurrent) residual layers. Clearly, this is not going to be my criterion for rejection/acceptance, since one can easily make it fail - but I was mostly asking for some sanity check Furthermore, it is a bit misleading to put in Table 3 FCN-8s and FCN8s-LRNN, since this gives the impression that the LRNN gives a boost by 10%. In practice the "FCN8s" prefix of "FCN8s-LRNN" is that of the authors, and not of Long et al (as indicated in Table 2, 8s original is quite worse than 8s here). Another thing that is not clear to me is where the boost comes from in Table 2; the authors mention that "when inserting the L-RNN after pool 3 and pool4 in FCN-8s, the L-RNN is able to learn contextual information over a much larger range than the receptive field of pure local convolutions. " This is potentially true, but I do not see why this was not also the case for FCN-32s (this is more a property of the recurrence rather than the 8/32 factor, right?) A few additional points: It seems like Fig 2b and Fig2c never made it into the pdf. Figure 4 is unstructured and throws some 30 boxes to the reader - I would be surprised if anyone is able to get some information out of this (why not have a table?) Appendix A: this is very mysterious. Did you try other learning rate schedules? (e.g. polynomial) What is the performance if you apply a standard training schedule? (e.g. step). Appendix C: "maps .. is" -> "maps ... are"
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
rJJHggGNx
rJq_YBqxx
Good paper, accept
The paper presents one of the first neural translation systems that operates purely at the character-level, another one being https://arxiv.org/abs/1610.03017 , which can be considered a concurrent work. The system is rather complicated and consists of a lot of recurrent networks. The quantitative results are quite good and the qualitative results are quite encouraging. First, a few words about the quality of presentation. Despite being an expert in the area, it is hard for me to be sure that I exactly understood what is being done. The Subsections 3.1 and 3.2 sketch two main features of the architecture at a rather high-level. For example, does the RNN sentence encoder receive one vector per word as input or more? Figure 2 suggests that it’s just one. The notation h_t is overloaded, used in both Subsection 3.1 and 3.2 with clearly different meaning. An Appendix that explains unambiguously how the model works would be in order. Also, the approach appears to be limited by its reliance on the availability of blanks between words, a trait which not all languages possess. Second, the results seem to be quite good. However, no significant improvement over bpe2char systems is reported. Also, I would be curious to know how long it takes to train such a model, because from the description it seems like the model would be very slow to train (400 steps of BiNNN). On a related note, normally an ablation test is a must for such papers, to show that the architectural enhancements applied were actually necessary. I can imagine that this would take a lot of GPU time for such a complex model. On the bright side, Figure 3 presents some really interesting properties that of the embeddings that the model learnt. Likewise interesting is Figure 5. To conclude, I think that this an interesting application paper, but the execution quality could be improved. I am ready to increase my score if an ablation test confirms that the considered encoder is better than a trivial baseline, that e.g. takes the last hidden state for each RNN.
7: Good paper, accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7
4
BJKwHefNl
rJq_YBqxx
Well-executed paper with good analysis but little novelty
Update after reading the authors' responses & the paper revision dated Dec 21: I have removed the comment "insufficient comparison to past work" in the title & update the score from 3 -> 5. The main reason for the score is on novelty. The proposal of HGRU & the use of the R matrix are basically just to achieve the effect of "whether to continue from character-level states or using word-level states". It seems that these solutions are specific to symbolic frameworks like Theano (which the authors used) and TensorFlow. This, however, is not a problem for languages like Matlab (which Luong & Manning used) or Torch. ----- This is a well-written paper with good analysis in which I especially like Figure 5. However I think there is little novelty in this work. The title is about learning morphology but there is nothing specifically enforced in the model to learn morphemes or subword units. For example, maybe some constraints can be put on the weights in w_i in Figure 1 to detect morpheme boundaries or some additional objective like MDL can be used (though it's not clear how these constraints can be incorporated cleanly). Moreover, I'm very surprised that litte comparison (only a brief mention) was given to the work of (Luong & Manning, 2016) [1], which trains deep 8-layer word-character models and achieves much better results on English-Czech, e.g., 19.6 BLEU compared to 17.0 BLEU achieved in the paper. I think the HGRU thing is over-complicated in terms of presentation. If I read correctly, what HGRU does is basically either continue the character decoder or reset using word-level states at boundaries, which is what was done in [1]. Luong & Manning (2016) even make it more efficient by not having to decode all target words at the morpheme level & it would be good to know the speed of the model proposed in this ICLR submission. What end up new in this paper are perhaps different analyses on what a character-based model learns & adding an additional RNN layer in the encoder. One minor comment: annotate h_t in Figure 1. [1] Minh-Thang Luong and Christopher D. Manning. 2016. Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models. ACL. https://arxiv.org/pdf/1604.00788v2.pdf
5: Marginally below acceptance threshold
5
-1
H1aZfRINx
rJq_YBqxx
A well written paper
* Summary: This paper proposes a neural machine translation model that translates the source and the target texts in an end to end manner from characters to characters. The model can learn morphology in the encoder and in the decoder the authors use a hierarchical decoder. Authors provide very compelling results on various bilingual corpora for different language pairs. The paper is well-written, the results are competitive compared to other baselines in the literature. * Review: - I think the paper is very well written, I like the analysis presented in this paper. It is clean and precise. - The idea of using hierarchical decoders have been explored before, e.g. [1]. Can you cite those papers? - This paper is mainly an application paper and it is mainly the application of several existing components on the character-level NMT tasks. In this sense, it is good that authors made their codes available online. However, the contributions from the general ML point of view is still limited. * Some Requests: -Can you add the size of the models to the Table 1? - Can you add some of the failure cases of your model, where the model failed to translate correctly? * An Overview of the Review: Pros: - The paper is well written - Extensive analysis of the model on various language pairs - Convincing experimental results. Cons: - The model is complicated. - Mainly an architecture engineering/application paper(bringing together various well-known techniques), not much novelty. - The proposed model is potentially slower than the regular models since it needs to operate over the characters instead of the words and uses several RNNs. [1] Serban IV, Sordoni A, Bengio Y, Courville A, Pineau J. Hierarchical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808. 2015 Jul 17.
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
SJ2TV3CQl
BybtVK9lg
VAE model for LDA. Interesting idea, but a incremental.
This is an interesting paper on a VAE framework for topic models. The main idea is to train a recognition model for the inference phase which, because of so called “amortized inference” can be much faster than normal inference where inference must be run iteratively for every document. Some comments: Eqn 5: I find the notation p(theta(h)|alpha) awkward. Why not P(h|alpha) ? The generative model seems agnostic to document length, meaning that the latent variables only generate probabilities over word space. However, the recognition model is happy to radically change the probabilities q(z|x) if the document length changes because the input to q changes. This seems undesirable. Maybe they should normalize the input to the recognition network? The ProdLDA model might well be equivalent to exponential family PCA or some variant thereof: http://jmlr.csail.mit.edu/proceedings/papers/v9/li10b/li10b.pdf Section 4.1: error in the equation. The last term should be Prod_i exp(delta*_r_i) * exp((1-delta)*s_i). Last paragraph 4.1. The increment relative to NVDM seems small: approximating the Dirichlet with a Gaussian and high momentum training. While these aspects may be important in practice they are somewhat incremental. I couldn’t find the size of the vocabularies of the datasets in the paper. Does this method work well for very high dimensional sparse document representations? The comment on page 8 that the method is very sensitive to optimization tricks like very high momentum in ADAM and batch normalization is a bit worrying to me. In the end, it’s a useful paper to read, but it’s not going to be the highlight of the conference. The relative increment is somewhat small and seems to heavily rely optimization tricks.
6: Marginally above acceptance threshold
6
-1
Hy-FkbfVl
BybtVK9lg
Nice paper to read
This paper proposes the use of neural variational inference method for topic models. The paper shows a nice trick to approximate Dirichlet prior using softmax basis with a Gaussian and then the model is trained to maximize the variational lower bound. Also, the authors study a better way to alleviate the component collapsing issue, which has been problematic for continuous latent variables that follow Gaussian distribution. The results look promising and the experimental protocol sounds fine. Minor comments: Please add citation to [1] or [2] for neural variational inference, and [2] for VAE. A typo in “This approximation to the Dirichlet prior p(θ|α) is results in the distribution”, it should be “This approximation to the Dirichlet prior p(θ|α) results in the distribution” In table 2, it is written that DMFVI was trained more than 24hrs but failed to deliver any result, but why not wait until the end and report the numbers? In table 3, why are the perplexities of LDA-Collapsed Gibbs and NVDM are lower while the proposed models (ProdLDA) generates more coherent topics? What is your intuition on this? How does the training speed (until the convergence) differs by using different learning-rate and momentum scheduling approaches shown as in figure 1? It may be also interesting to add some more analysis on the latent variables z (component collapsing and etc., although your results indirectly show that the learning-rate and momentum scheduling trick removes this issue). Overall, the paper clearly proposes its main idea, explain why it is good to use NVI, and its experimental results support the original claim. It explains well what are the challenges and demonstrate their solutions. [1] Minh et al., Neural Variational Inference and Learning in Belief Networks, ICML’14 [2] Rezende et al., Stochastic Backpropagation and Approximate Inference in Deep Generative Models, ICML’14
7: Good paper, accept
3: The reviewer is fairly confident that the evaluation is correct
7
3
SJoekNG4g
BybtVK9lg
Promising direction, but the paper needs more work
The authors propose NVI for LDA variants. The authors compare NVI-LDA to standard inference schemes such as CGS and online SVI. The authors also evaluate NVI on a different model ProdLDA (not sure this model has been proposed before in the topic modeling literature though?) In general, I like the direction of this paper and NVI looks promising for LDA. The experimental results however confound model vs inference which makes it hard to understand the significance of the results. Furthermore, the authors don't discuss hyper-parameter selection which is known to significantly impact performance of topic models. This makes it hard to understand when the proposed method can be expected to work. Can you maybe generate synthetic datasets with different Dirichlet distributions and assess when the proposed method recovers the true parameters? Figure 1: Is this prior or posterior? The text talks about sparsity whereas the y-axis reads "log p(topic proportions)" which is a bit confusing. Section 3.2: it is not clear what you mean by unimodal in softmax basis. Consider a Dirichlet on K-dimensional simplex with concentration parameter alpha/K where alpha<1 makes it multimodal. Isn't the softmax basis still multimodal? None of the numbers include error bars. Are the results statistically significant? Minor comments: Last term in equation (3) is not "error"; reconstruction accuracy or negative reconstruction error perhaps? The idea of using an inference network is much older, cf. Helmholtz machine.
The authors propose NVI for LDA variants. The authors compare NVI-LDA to standard inference schemes such as CGS and online SVI. The authors also evaluate NVI on a different model ProdLDA (not sure this model has been proposed before in the topic modeling literature though?) In general, I like the direction of this paper and NVI looks promising for LDA. The experimental results however confound model vs inference which makes it hard to understand the significance of the results. Furthermore, the authors don't discuss hyper-parameter selection which is known to significantly impact performance of topic models. This makes it hard to understand when the proposed method can be expected to work. Can you maybe generate synthetic datasets with different Dirichlet distributions and assess when the proposed method recovers the true parameters? Figure 1: Is this prior or posterior? The text talks about sparsity whereas the y-axis reads "log p(topic proportions)" which is a bit confusing. Section 3.2: it is not clear what you mean by unimodal in softmax basis. Consider a Dirichlet on K-dimensional simplex with concentration parameter alpha/K where alpha<1 makes it multimodal. Isn't the softmax basis still multimodal? None of the numbers include error bars. Are the results statistically significant? Minor comments: Last term in equation (3) is not "error"; reconstruction accuracy or negative reconstruction error perhaps? The idea of using an inference network is much older, cf. Helmholtz machine.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
-1
4
Skkn6YbNx
HJKkY35le
Clearly identifies and attacks a key problem in GANs
This paper does a good job of clearly articulating a problem in contemporary training of GANs, coming up with an intuitive solution via regularizers in addition to optimizing only the discriminator score, and conducting clever experiments to show that the regularizers have the intended effect. There are recent related and improved GAN variants (ALI, VAEGAN, potentially others), which are included in qualitative comparisons, but not quantitative. It would be interesting to see whether these other types of modified GANs already make some progress in addressing the missing modes problem. If code is available for those methods, the paper could be strengthened a lot by running the mode-missing benchmarks on them (even if it turns out that a "competing" method can get a better result in some cases). The experiments on digits and faces are good for validating the proposed regularizers. However, if the authors can show better results on CIFAR-10, ImageNet, MS-COCO or some other more diverse and challenging dataset, I would be more convinced of the value of the proposed method.
7: Good paper, accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7
4
SkMk-sH4g
HJKkY35le
Review
Summary: This paper proposes several regularization objective such as "geometric regularizer" and "mode regularizer" to stabilize the training of GAN models. Specifically, these regularizes are proposed to alleviate the mode-missing behaviors of GANs. Review: I think this is an interesting paper that discusses the mode-missing behavior of GANs and proposes new evaluation metric to evaluate this behavior. However, the core ideas of this paper are not very innovative to me. Specifically, there has been a lot of papers that combine GAN with an autoencoder and the settings of this paper is very similar to the other papers such as Larsen et al. As I pointed out in my pre-review comments, in the Larsen et al. both the geometric regularizer and model regularizer has been proposed in the context of VAEs and the way they are used is essentially the same as this paper. I understand the argument of the authors that the VAEGAN is a VAE that is regularized by GAN and in this paper the main generative model is a GAN that is regularized by an autoencoder, but at the end of the day, both the models are combining the autoencoder and GAN in a pretty much same way, and to me the resulting model is not very different. I also understand the other argument of the authors that Larsen et al is using VAE while this paper is using an autoencoder, but I am still not convinced how this paper outperforms the VAEGAN by just removing the KL term of the VAE. I do like that this paper looks at the autoencoder objective as a way to alleviate the missing mode problem of GANs, but I think that alone does not have enough originality to carry the paper. As pointed out in the public comments by other people, I also suggest that the authors do an extensive comparison of this work and Larsen et al. in terms of missing mode, sample quality and quantitative performances such as inception score.
4: Ok but not good enough - rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4
4
rkGUAyfNl
HJKkY35le
Review
The authors identify two very valid problem of mode-missing in Generative Adversarial Networks, explain their intuitions as to why these problems occur and propose ways to remedy it. The first problem is about the discriminator becoming too good (close to 0 on fake, and 1 on real data) and providing 0 gradients to the generator. The second problem is that GANs are prone to missing modes of the data generating distribution entirely. The authors propose two regularization techniques to address these problems: Geometric Metrics Regularizer and Mode Regularizer Overall, I felt that this is a good paper, providing a good analysis of the problems and proposing sensible solutions - if lacking solid from-first-principles motivation for the particular choices made. My other critique is the focus on manifolds, almost completely disregarding the probability density on the manifold - see my detailed comment below. Detailed comments on the Geometric Metrics Regularizer: The motivation for this is to provide a way to measure and penalize distance between two degenerate probability distributions concentrated on non-overlapping manifolds, those of the generator and of the real data. There are different ways one could go about measuring difference between two manifolds or probability distributions concentrated on manifolds, for example: - projection heuristic: measure the average distance between each point x on manifold A and the corresponding nearest point on manifold B (let’s call it the projection of x onto B). - earth mover’s distance: establish a smooth mapping between the two manifolds that maps denser areas on manifold A to nearby denser areas of manifold B, and measure the average distance between corresponding pairs. The two heuristics are similar but while the earth mover distance is a divergence measure for distributions, the projection heuristic only measures the divergence of the manifolds, disregarding the distributions in question. The authors propose measuring the average distance between a point on the real data manifold and a point it gets mapped to by the composition of the encoder and the generator. While E○G will map to the generative manifold, it is unclear to me if they would map to a high-probability region on that manifold, so this probably doesn’t implement anything like Earth Mover’s Distance. On this note, I have just remembered seeing this before: https://github.com/danielvarga/earth-moving-generative-net As the encoder is trained so that E○G(x) is close to x on average, it feels like a variant of the projection heuristic above. Would the authors agree with this assessment?
7: Good paper, accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7
4
BkilGLDVg
Hy-lMNqex
Stripes and Tartan are interesting architectures but the contribution over the three previous publications on this idea is extremely small
Summary: The paper describes how the DaDianNao (DaDN) DNN accelerator can be improved by employing bit serial arithmetic. They replace the bit-parallel multipliers in DaDN with multipliers that accept the weights in parallel but the activations serially (serial x parallel multipliers). They increase the number of units keeping the total number of adders constant. This enables them to tailor the time and energy consumed to the number of bits used to represent activations. They show how their configuration can be used to process both fully-connected and convolutional layers of DNNs. Strengths: Using variable precision for each layer of the network is useful - but was previously reported in Judd (2015) Good evaluation including synthesis - but not place and route - of the units. Also this evaluation is identical to that in Judd (2016b) Weaknesses: The idea of combining bit-serial arithmetic with the DaDN architecture is a small one. The authors have already published almost everything that is in this paper at Micro 2016 in Judd (2016b). The increment here is the analysis of the architecture on fully-connected layers. Everything else is in the previous publication. The energy gains are small - because the additional flip-flop energy of shifting the activations in almost offsets the energy saved on reducing the precision of the arithmetic. The authors don’t compare to more conventional approaches to variable precision - using bit-parallel arithmetic units but data gating the LSBs so that only the relevant portion of the arithmetic units toggle. This would not provide any speedup, but would likely provide better energy gains than the bit-serial x bit-parallel approach. Overall: The Tartan and Stripes architectures are interesting but the incremental contribution of this paper (adding support for fully-connected layers) over the three previous publications on this topic, and in particular Judd (2016b) is very small. This idea is worth one good paper, not four.
5: Marginally below acceptance threshold
5
-1
ByzbXCQVl
Hy-lMNqex
consider a better venue for submission
This paper proposed a hardware accelerator for DNN. It utilized the fact that DNN are very tolerant to low precision inference and outperforms a state-of-the-art bit-parallel accelerator by 1.90x without any loss in accuracy while it is 1.17x more energy efficient. TRT requires no network retraining. It achieved super linear scales of performance with area. The first concern is that this paper doesn't seem very well-suited to ICLR. The circuit diagrams makes it more interesting for the hardware or circuit design community. The second concern is the "take-away for machine learning community", seeing from the response, the take-away is using low-precision to make inference cheaper. This is not novel enough. In last year's ICLR, there were at least 4 papers discussing using low precision to make DNN more efficient. These ideas have also been explored in the authors' previous papers.
4: Ok but not good enough - rejection
3: The reviewer is fairly confident that the evaluation is correct
4
3
ByMP2BPEl
Hy-lMNqex
Improving inference speed and energy-efficiency in (simulated) hardware implementations by exploiting per-layer differences in numerical precision requirements.
This seems like a reasonable study, though it's not my area of expertise. I found no fault with the work or presentation, but did not follow the details or know the comparable literature. There seem to be real gains to be had through this technique, though they are only in terms of efficiency in hardware, not changing accuracy on a task. The tasks chosen (Alexnet / VGG) seem reasonable. The results are in simulation rather than in actual hardware. The topic seems a little specialized for ICLR, since it does not describe any new advances in learning or representations, albeit that the CFP includes "hardware". I think the appeal among attendees will be rather limited. Please learn to use parenthetical references correctly. As is your references make reading harder.
6: Marginally above acceptance threshold
6
-1
rkqwomzrl
Hy-lMNqex
Incremental, perhaps better suited for an architecture conference (ISCA/ ASPLOS)
The authors present TARTAN, a derivative of the previously published DNN accelerator architecture: “DaDianNao”. The key difference is that TARTAN’s compute units are bit-serial and unroll MAC operation over several cycles. This enables the units to better exploit any reduction in precision of the input activations for improvement in performance and energy efficiency. Comments: 1. I second the earlier review requesting the authors to be present more details on the methodology used for estimating energy numbers for TARTAN. It is claimed that TARTAN gives only a 17% improvement in energy efficiency. However, I suspect that this small improvement is clearly within the margin of error ij energy estimation. 2. TARTAN is a derivative of DaDianNao, and it heavily relies the overall architecture of DaDianNao. The only novel aspect of this contribution is the introduction of the bit-serial compute unit, which (unfortunately) turns out to incur a severe area overhead (of nearly 3x over DaDianNao's compute units). 3. Nonetheless, the idea of bit-serial computation is certainly quite interesting. I am of the opinion that it would be better appreciated (and perhaps be even more relevant) in a circuit design / architecture focused venue.
5: Marginally below acceptance threshold
5
-1
Hkf69_ENe
Hy-lMNqex
My thoughts too
I do not feel very qualified to review this paper. I studied digital logic back in university, that was it. I think the work deserves a reviewer with far more sophisticated background in this area. It certainly seems useful. My advice is also to submit it another venue.
4: Ok but not good enough - rejection
4
-1
r1D5x7vNx
HJTXaw9gx
Good Work, Preliminary Results
This paper presents an algorithm for approximating the solution of certain time-evolution PDEs. The paper presents an interesting learning-based approach to solve such PDEs. The idea is to alternate between: 1. sampling points in space-time 2. generating solution to PDE at "those" sampled points 3. regressing a space-time function to satisfy the latter solutions at the sampled points (and hopefully generalize beyond those points). I actually find the proposed algorithm interesting, and potentially useful in practice. The classic grid-based simulation of PDEs is often too expensive to be practical, due to the curse of dimensionality. Hence, learning the solution of PDEs makes a lot of sense for practical settings. On the other hand, as the authors point out, simply running gradient descent on the regression loss function does not work, because of the non-differentiablity of the "min" that shows up in the studied PDEs. Therefore, I think the proposed idea is actually very interesting approach to learning the PDE solution in presence of non-differentability, which is indeed a "challenging" setup for numerically solving PDEs. The paper motivates the problem (time-evolution PDE with "min" operator applied to the spatial derivatives) by applications in control thery, but I think there is more direct interest in such problems for the machine learning community, and even deep learning community. For example http://link.springer.com/chapter/10.1007/978-3-319-14612-6_4 studies approximate solution to PDEs with very similar properties (evolution+"min") to develop new optimization algorithms. The latter is indeed used to training deep networks: https://arxiv.org/abs/1601.04114 I think this work would catch even more attention if the authors could show some experiments with higher-dimensional problems (where grid-based methods are absolutely inapplicable).
7: Good paper, accept
3: The reviewer is fairly confident that the evaluation is correct
7
3
r1INGfVNl
HJTXaw9gx
somewhat interesting paper, wrong conference
Approximating solutions to PDEs with NN approximators is very hard. In particular the HJB and HJI eqs have in general discontinuous and non-differentiable solutions making them particularly tricky (unless the underlying process is a diffusion in which case the Ito term makes everything smooth, but this paper doesn't do that). What's worse, there is no direct correlation between a small PDE residual and a well performing-policy [tsitsiklis? beard? todorov?, I forget]. There's been lots of work on this which is not properly cited. The 2D toy examples are inadequate. What reason is there to think this will scale to do anything useful? There are a bunch of typos ("Range-Kutta"?) . More than anything, this paper is submitted to the wrong venue. There are no learned representations here. You're just using a NN. That's not what ICLR is about. Resubmit to ACC, ADPRL or CDC. Sorry for terseness. Despite rough review, I absolutely love this direction of research. More than anything, you have to solve harder control problems for people to take notice...
3: Clear rejection
3
-1
rku9KeJ4x
HJTXaw9gx
Hard to follow; unclear about contribution
I have no familiarity with the HJI PDE (I've only dealt with parabolic PDE's such as diffusion in the past). So the details of transforming this problem into a supervised loss escape me. Therefore, as indicated below, my review should be taken as an "educated guess". I imagine that many readers of ICLR will face a similar problem as me, and so, if this paper is accepted, at the least the authors should prepare an appendix that provides an introduction to the HJI PDE. At a high level, my comments are: 1. It seems that another disadvantage of this approach is that a new network must be trained for each new domain (including domain size), system function f(x) or boundary condition. If that is correct, I wonder if it's worth the trouble when existing tools already solve these PDE's. Can the authors shed light on a more "unifying approach" that would require minimal changes to generalize across PDE's? 2. How sensitive is the network's result to domains of different sizes? It seems only a single size 51 x 51 was tested. Do errors increase with domain size? 3. How general is this approach to PDE's of other types e.g. diffusion?
5: Marginally below acceptance threshold
5
-1
H1dYu3xre
S13wCE9xx
Not convincing
The paper considers Grassmannian SGD to optimize the skip gram negative sampling (SGNS) objective for learning better word embeddings. It is not clear why the proposed optimization approach has any advantage over the existing vanilla SGD-based approach - neither approach comes with theoretical guarantees - the empirical comparisons show marginal improvements. Furthermore, the key idea here - that of projector splitting algorithm - has been applied on numerous occasions to machine learning problems - see references by Vandereycken on matrix completion and by Sepulchre on matrix factorization. The computational cost of the two approaches is not carefully discussed. For instance, how expensive is the SVD in (7)? One can always perform an efficient low-rank update to the SVD - therefore, a rank one update requires O(nd) operations. What is the computational cost of each iteration of the proposed approach?
4: Ok but not good enough - rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4
4
SJmfRhWEg
S13wCE9xx
still somewhat confused
Dear authors, The authors' response clarified some of my confusion. But I still have the following question: -- The response said a first contribution is a different formulation: you divide the word embedding learning into two steps, step 1 looks for a low-rank X (by Riemannian optimization), step 2 factorizes X into two matrices (W, C). You are claiming that your model outperforms previous approaches that directly optimizes over (W, C). But since the end result (the factors) is the same, can the authors provide some intuition and justification why the proposed method works better? As far as I can see, though parameterized differently, the first step of your method and previous methods (SGD) are both optimizing over low-rank matrices. Admittedly, Riemannian optimization avoids the rotational degree of freedom (the invertible matrix S you are mentioning in sec 2.3), but I am not 100% certain at this point this is the source of your gain; learning curves of objectives would help to see if Riemannian optimization is indeed more effective. -- Another detail I could not easily find is the following. You said a disadvantage of other approaches is that their factors W and C do not directly reflect similarity. Did you try to multiply the factors W and C from other optimizers and then factorize the product using the method in section 2.3, and use the new W for your downstream tasks? I am not sure if this would cause much difference in the performance. Overall, I think it is always interesting to apply advanced optimization techniques to machine learning problems. The current paper would be stronger from the machine learning perspective, if more thorough comparison and discussion (as mentioned above) are provided. On the other hand, my expertise is not in NLP and I leave it to other reviewers to decide the significance in experimental results.
5: Marginally below acceptance threshold
3: The reviewer is fairly confident that the evaluation is correct
5
3
SkD87Ly4x
S13wCE9xx
Elegant method, not sure about the practical benefits
This paper presents a principled optimization method for SGNS (word2vec). While the proposed method is elegant from a theoretical perspective, I am not sure what the tangible benefits of this approach are. For example, does using Riemannian optimization allow the model to converge faster than the alternatives? The evaluation doesn't show a dramatic advantage to RO-SGNS; the 1% difference on the word similarity benchmarks is within the range of hyperparameter effects (see "Improving Distributional Similarity with Lessons Learned from Word Embeddings", (Levy et al., 2015)). The theoretical connection to Riemannian optimization is nice though, and it might be useful for understanding related methods in the future.
6: Marginally above acceptance threshold
3: The reviewer is fairly confident that the evaluation is correct
6
3
SkgKHFBVg
SywUHFcge
Connection with generalization capabilities
Under my point of view, the robustness of a classifier against adversarial noise it is interesting if we find any relationship between that robustness and generalization to new unseen test samples. I guess that this relationship is direct in most of the problems but perhaps classifier C1 could be more robust than C2 against adv. noise but not better for new unseen samples from the task in consideration. Best results on new unseen samples are normally related to robustness against the common distortions of the data, e.g. invariance to scale, rotation… than robustness to adv. noise. I can not see any direct conclusion from table 5 results. Essentially i am not convinced about the necessity to measure the robustness against adversarial noise.
5: Marginally below acceptance threshold
5
-1
HyW89fRmx
SywUHFcge
Interesting idea, good insights, but have some flaws
This paper theoretically analyzes the adversarial phenomenon by modeling the topological relationship between the feature space of the trained and the oracle discriminate function. In particular, the (complicated) discriminant function (f) is decomposed into a feature extractor (g) and a classifier (c), where the feature extractor (g) defines the feature space. The main contribution of this paper is to propose abstract understanding and analysis for adversarial phenomenon, which is interesting and important. However, this paper also has the following problems. 1) It is not clear how the classifier c can affect the overall robustness to adversarial noises. The classifier c seems absent from the analysis, which somehow indicates that the classifier does not matter. (Please correct me if it is not true) This is counter-intuitive. For example, if we always take the input space as the feature space and the entire f as the classifier c, the strong robustness can always hold. I am also wondering if the metric d has anything to do with the classifier c. 2) A very relevant problem is how to decompose f into g and c. For examples, one can take any intermediate layer or the input space as the feature space for a neural network. Will this affect the analysis of the adversarial robustness? 3) The oracle is a good concept. However, it is hard to explicitly define it. In this paper, the feature space of the oracle is just the input image space, and the inf-norm is used as the metric. This implementation makes the algorithm in Section 4 quite similar to existing methods (though there are some detailed differences as mentioned in the discussion). Due to the above problems, I feel that some aspects of the paper are not ready. If the problems are resolved or better clarified, I believe a higher rating can be assigned to this paper. In addition, the main text of this paper is somehow too long, the arguments can be more focused if the main paper become more concise.
5: Marginally below acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
5
4
Skh_8uJ4l
SywUHFcge
Flawed topological considerations, interesting practical results
This paper aims at making three contributions: - Charecterizing robustness to adversarials in a topological manner. - Connecting the topological characterization to more quantitative measurements and evaluating deep networks. - Using Siamese network training to create models robust to adversarial in a practical manner and evaluate their properties. In my opinion the paper would improve greatly if the first, topological analysis attempt would be removed from the paper altogether. A central notion of the paper is the abstract characterization of robustness. The main weakness is the notion of strong robustness itself, which is an extremely rigid notion. It requires the partitioning of the predictor function by class to match the exact partitioning of the oracle. This robustness is almost never the case in real life: it requires that the predictor is almost perfect. The main flaw however is that the output space is assumed to have discrete topology and continuity is assumed for the classifier. Continuity of the classifier wrt. a discrete output is also never really satisfied. However, if the output space is assumed to be continues values with an interesting topology (like probabilities), then the notion of strong robustness becomes so constrained and strict, that it has even less practical sense and relevance. Based on those definition, several uninteresting, trivial consequences follow. They seem to be true, with inelegant proofs, but that matters little as they seem irrelevant for any practical purposes. The second part is a well executed experiment by training a Siamese architecture with an explicit additional robustness constraint. The approach seems to be working very well, but is compared only to a baseline (stability training) which performs worse than the original model without any trainings for adversarials. This is strange as adversarial examples have been studied extensively in the past year and several methods claimed improvements over the original model not trained for robustness. The experimental section and approach would look interesting, if it were compared with a stronger baseline, however the empty theoretical definitions and analysis attempts make the paper in its current form unappealing.
3: Clear rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
3
4
H1eu-hlVl
H12GRgcxg
Training with Noisy Labels
This work address the problem of supervised learning from strongly labeled data with label noise. This is a very practical and relevant problem in applied machine learning. The authors note that using sampling approaches such as EM isn't effective, too slow and cannot be integrated into end-to-end training. Thus, they propose to simulate the effects of EM by a noisy adaptation layer, effectively a softmax, that is added to the architecture during training, and is omitted at inference time. The proposed algorithm is evaluated on MNIST and shows improvements over existing approaches that deal with noisy labeled data. A few comments. 1. There is no discussion in the work about the increased complexity of training for the model with two softmaxes. 2. What is the rationale for having consecutive (serialized) softmaxes, instead of having a compound objective with two losses, or a network with parallel losses and two sets of gradients? 3. The proposed architecture with only two hidden layers isn't not representative of larger and deeper models that are practically used, and it is not clear that shown results will scale to bigger networks. 4. Why is the approach only evaluated on MNIST, a dataset that is unrealistically simple.
5: Marginally below acceptance threshold
5
-1
BkL101z4e
H12GRgcxg
Interesting paper but lack of experiments
The paper addressed the erroneous label problem for supervised training. The problem is well formulated and the presented solution is novel. The experimental justification is limited. The effectiveness of the proposed method is hard to gauge, especially how to scale the proposed method to large number of classification targets and whether it is still effective. For example, it would be interesting to see whether the proposed method is better than training with only less but high quality data. From Figure 2, it seems with more data, the proposed method tends to behave very well when the noise fraction is below a threshold and dramatically degrades once passing that threshold. Analysis and justification of this behavior whether it is just by chance or an expected one of the method would be very useful.
7: Good paper, accept
7
-1
BksCTxHEe
H12GRgcxg
This paper investigates how to make neural nets be more robust to noise in the labels
This paper looks at how to train if there are significant label noise present. This is a good paper where two main methods are proposed, the first one is a latent variable model and training would require the EM algorithm, alternating between estimating the true label and maximizing the parameters given a true label. The second directly integrates out the true label and simply optimizes the p(z|x). Pros: the paper examines a training scenario which is a real concern for big dataset which are not carefully annotated. Cons: the results on mnist is all synthetic and it's hard to tell if this would translate to a win on real datasets. - comments: Equation 11 should be expensive, what happens if you are training on imagenet with 1000 classes? It would be nice to see how well you can recover the corrupting distribution parameter using either the EM or the integration method. Overall, this is an OK paper. However, the ideas are not novel as previous cited papers have tried to handle noise in the labels. I think the authors can make the paper better by either demonstrating state-of-the-art results on a dataset known to have label noise, or demonstrate that a method can reliably estimate the true label corrupting probabilities.
5: Marginally below acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
5
4
HysiJjZEe
HJStZKqel
This paper proposes an extension of TerpreT by adding a set of functions that can deal with inputs in the form of tensor with continuous values. This potentially allows TerpreT to learn programs over images or other “natural” sources. TerpreT generates a source code from a set of input/output examples. The code is generate in the form of a TensorFlow computation graph based on a set of simple and elegant program representations. One of the limitation of TerpreT is the type of inputs it can work with, this work aim at enriching it by adding “learnable functions” that can deal with more complex input variables. While I really like this direction of research and the development of TerpreT, I find the contribution of this work to be a bit limited. This would have been fine if it was supported by a strong and convincing experimental section, but unfortunately, the experimental section is a bit weak: the tasks studied are relatively simple and the baselines are not very strong. For example let us consider the SUM2x2 problem: all the images of digits are from MNIST, which can be classify with an error of 8% with a linear model (and even better with neural networks), There is also a linear model that given 4 numbers will compute the 2x2sum of them that is: y=Ax where x is the vector containing the 4 numbers and A = [1 0 1 0;1 1 0 0;1 0 0 1; 0 1 0 1]. This means a succession of two linear models can solve the sum2x2 problems with little trouble. While I'm aware that this work aims at automatically finding the combination of simple models to achieve this task end-to-end, the fact that the solution is a set of 2 consecutive linear models makes it a bit too simple in my humble opinion. Overall, I think that this paper proposes a promising extension of TerpreT that is unfortunately not backed by experiments that are convincing enough.
4: Ok but not good enough - rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4
4
Hkw5qHb4x
HJStZKqel
Final Review: Fine idea but very basic tasks, weak baselines, and misleading presentation
The authors explore the idea of life-long learning in the context of program generation. The main weakness of this paper is that it mixes a few issues without showing strong results on any of them. The test tasks are about program generation, but these are toy tasks even by the low standards of deep-learning for program generation (except for the MATH task, they are limited to 2x2 grid). Even on MATH, the authors train and discuss generalization from 2-digit expressions -- these are very short, so the conclusiveness of the experiment is unclear. The main point of the paper is supposed to be transfer learning though. Unluckily, the authors do not compare to other transfer learning models (e.g., "Progressive Neural Networks") nor do they test on tasks that were previously used by others. We find that only testing on a newly-created task with a weak baseline is not sufficient for ICLR acceptance. After clarifying comments from the authors and more experiments (see the discussion above), I'm now convinced that the authors mostly measure overfitting, which in their model is prevented because the model is hand-fitted to the task. While the idea might still be valid and interesting, many harder and much more diverse experiments are needed to verify it. I consider this paper a clear rejection at present.
2: Strong rejection
2
-1
BJJjahzEe
HJStZKqel
Review and review update
I think the paper is a bit more solid now and I still stand by my positive review. I do however agree with other reviewers that the tasks are very simple. While NPI is trained with stronger supervision, it is able to learn quicksort perfectly as shown by Dawn Song and colleagues in this conference. Reed et al had already demonstrated it for bubblesort. If the programs are much shorter, it becomes easy to marginalise over latent variables (pointers) and solve the task end to end. The failure to attack much longer combinatorial problems is my main complaint about this paper, because it makes one feel that it is over-claiming. In relation to the comments concerning NPI, Reed et al freeze the weights of the core LSTM to then show that an LSTM with fixed weights can continue learning new programs that re-use the existing programs (ie the trained model can create new programs). However, despite this criticism, I still think this is an excellent paper, illustrating the power of combining traditional programming with neural networks. It is very promising and I would love to see it appear at ICLR. =========== This paper makes a valuable contribution to the emerging research area of learning programs from data. The authors mix their TerpreT framework, which enables them to compile programs with finite integer variables to a (differentiable) TensorFlow graph, and neural networks for perceiving simple images. This is made possible through the use of simple tapes and arithmetic tasks. In these arithmetic tasks, two networks are re-used, one for digits and one for arithmetic operations. This clean setup enables the authors to demonstrate not only the avoidance of catastrophic interference, but in fact some reverse transfer. Overall, this is a very elegant and potentially very useful way to combine symbolic programming with neural networks. As a full-fledged tool, it could become very useful. Thus far it has only been demonstrated on very simple examples. It would be nice for instance to see it demonstrated in all the tasks introduced in other approaches to neural programming and induction: sorting, image manipulation, semantic parsing, question answering. Hopefully, the authors will release neural TerpreT to further advance research in this domain.
8: Top 50% of accepted papers, clear accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
8
4
rySdVpB4e
HkYhZDqxg
This paper proposes a variant of a recurrent neural network that has two orthogonal temporal dimensions that can be used as a decoder to generate tree structures (including the topology) in an encoder-decoder setting. The architecture is well motivated and I can see several applications (in addition to what's presented in the paper) that need to generate tree structures given an unstructured data. One weakness of the paper is the limitation of experiments. IFTTT dataset seems to be an interesting appropriate application, and there is also a synthetic dataset, however it would be more interesting to see more natural language applications with syntactic tree structures. Still, I consider the experiments sufficient as a first step to showcase a novel architecture. A strength is that the authors experiment with different design decisions when building the topology predictor components of the architecture, about when / how to decide to terminate, as opposed to making a single arbitrary choice. I see future applications of this architecture and it seems to have interesting directions for future work so I suggest its acceptance as a conference contribution.
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
rJeK5Tz4x
HkYhZDqxg
review
The paper propose DRNN as a neural decoder for tree structures. I like the model architecture since it has two clear improvements over traditional approaches — (1) the information flows in two directions, both from the parent and from siblings, which is desirable in tree structures (2) the model use a probability distribution to model the tree boundary (i.e. the last sibling or the leaf). This avoids the use of special ending symbols which is larger in size and putting more things to learn for the parameters (shared with other symbols). The authors test the DRNN using the tasks of recovering the synthetic trees and recovering functional programs. The model did better than traditional methods like seq2seq models. I think the recovering synthetic tree task is not very satisfying for two reasons — (1) the surface form itself already containing some of the topological information which makes the task easier than it should be (2) as we can see from figure 3, when the number of nodes grows (even to a number not very large), the performance of the model drops dramatically, I am not sure if a simple baseline only captures the topological information in the surface string would be much worse than this. And DRNN in this case, seems can’t show its full potentials since the length of the information flow in the model won’t be very long. I think the experiments are interesting. But I think there are some other tasks which are more difficult and the tree structure information are more important in such tasks. For example, we have the seq2seq parsing model (Vinyals et al, 2014), is it possible to use the DRNN proposed here on the decoder side? I think tasks like this can show more potentials of the DRNN and can be very convincing that model architectures like this are better than traditional alternatives.
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
rJpv5lzEx
HkYhZDqxg
Accept
Authors' response well answered my questions. Thanks. Evaluation not changed. ### This paper proposes a neural model for generating tree structure output from scratch. The model does 1) separate the recurrence between depths and siblings; 2) separate the topology and label generation, and outperforms previous methods on a benchmark IFTTT dataset. Compared to previous tree-decoding methods, the model avoids manually annotating subtrees with special tokens, and thus is a very good alternative to such problems. The paper does solid experiments on one synthetic dataset, and outperforms alternative methods on one real-world IFTTT dataset. There are couple of interesting results in the paper that I believe is worth further investigation. Firstly, on the synthetic dataset, the precision drops rapidly with the number of nodes. Is it because that the vector representation of the sequential encoder fails to provide sufficient information of long sequences, such that the tree decoder can not do a good job? Or is it because that such tree decoder is not tolerant to the long sequence input, i.e., large tree structure? I believe that it is important to understand this before a better model can be developed. For example, if it is the fault of encoder, maybe an attention layer can be added, as in a seq-to-seq model, to preserve more information of the input sequence. Moreover, besides only showing how the precision changes with the number of nodes in the tree, it might be interesting to investigate how it goes with 1) number of depths; 2) number of widths; 3) symmetricity; etc. Moreover, as greedy search is used in decoding, it might be interesting to see how it helps, if it does, to use beam-search in tree decoding. On the IFTTT dataset, listing more statistics about this dataset might be helpful for better understanding the difficulty of this task. How deep are the trees? How large are the vocabularies on both language and program sides? The paper is well written, except for minor typo as mentioned in my pre-review questions. In general, I believe this is a solid paper, and more can be explored in this direction. So I tend to accept it.
7: Good paper, accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7
4
rJkNApWNx
HkzuKpLgg
review for Efficient Communications in Training Large Scale Neural Networks
This paper analyzes the ring-based AllReduce approach for multi-GPU data parallel training of deep net. Comments 1) The name linear pipeline is somewhat confusing to the readers, as the technique is usually referred as ring based approach in Allreduce literature. The author should use the standard name to make the connection easier. 2) The cost analysis of ring-based Allreduce is already provided in the existing literature. This paper applied the analysis to the case of multi-GPU deep net training, and concluded that the scaling is invariant of number of GPUs. 3) The ring-based allreduce approach is already supported by NVidia’s NCCL library, although the authors claim that their implementation comes earlier than the NCCL implementation. 4) The overlap of communication of computation is an already applied technique in systems such as TensorFlow and MXNet. The schedule proposed by the authors exploits the overlap partially, doing backprop of t-1 while doing reduce. Note that the dependency pattern can be further exploited; with the forward of layer t depend on update of parameter of layer t in last iteration. This can be done by a dependency scheduler. 5) Since this paper is about analysis of Allreduce, it would be nice to include detailed analysis of tree-shape reduction, ring-based approach and all-to-all approach. The discussion of all-to-all approach is missing in the current paper. In summary, this is a paper discussed existing Allreduce techniques for data parallel multi-GPU training of deep net, with cost analysis based on existing results. While I personally find the claimed result not surprising as it follows from existing analysis of Allreduce, the analysis might help some other readers. I view this as a baseline paper. The analysis of Allreduce could also been improved (see comment 5).
5: Marginally below acceptance threshold
5
-1
BkRqLgPNl
HkzuKpLgg
This paper presents a linear pipeline All-reduce approach for parallel neural networks on multiple GPU. The paper provides both theoretical analysis and experiments. Overall, the results presented in the paper are interesting, but the writing can be improved. Comments: - The authors compare their proposed approach with several alternative approaches and demonstrate strong performance of the proposed approaches. But it is unclear if the improvement is from the proposed approach or from the implementation. - The paper is not easy to follow and the writing can be improved in many place (aside from typos and missing references). Specifically, the authors should provide more intuitions of the proposed approach in the introduction and in Section 3. - The proposition and the analysis in Section 3.2 do not suggest the communication cost of linear pipeline is approximately 2x and log p faster than BE and MST, respectively, as claimed in many places in the paper. Instead, it suggests LP *cannot* be faster than these methods by 2x and log p times. More specifically, Eq (2) shows T_broadcase_BE/ T_broadcase_LP < 2. This does not provide an upper-bound of T_broadcase_LP and it can be arbitrary worse when comparing with T_broadcase_BE from this inequality. Therefore, instead of showing T_broadcase_BE/ T_broadcase_LP < 2, the authors should state T_broadcase_BE/ T_broadcase_LP > 1 when n approaches infinity. - It would be interesting to emphasize more on the differences between designing parallel algorithms on CPU v.s. on GPU to motivate the paper.
5: Marginally below acceptance threshold
3: The reviewer is fairly confident that the evaluation is correct
5
3
BJ_JRc4Nl
rJzaDdYxx
review: lacking experimental comparison to prior work
This paper proposes a new method, interior gradients, for analysing feature importance in deep neural networks. The interior gradient is the gradient measured on a scaled version of the input. The integrated gradient is the integral of interior gradients over all scaling factors. Visualizations comparing integrated gradients with standard gradients on real images input to the Inception CNN show that integrated gradients correspond to an intuitive notion of feature importance. While motivation and qualitative examples are appealing, the paper lacks both qualitative and quantitative comparison to prior work. Only the baseline (simply the standard gradient) is presented as reference for qualitative comparison. Yet, the paper cites numerous other works (DeepLift, layer-wise relevance propagation, guided backpropagation) that all attack the same problem of feature importance. Lack of comparison to any of these methods is a major weakness of the paper. I do not believe it is fit for publication without such comparisons. My pre-review question articulated this same concern and has not been answered.
3: Clear rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
3
4
rkGtUbzEl
rJzaDdYxx
review
The authors propose to measure “feature importance”, or specifically, which pixels contribute most to a network’s classification of an image. A simple (albeit not particularly effective) heuristic for measuring feature importance is to measure the gradients of the predicted class wrt each pixel in an input image I. This assigns a score to each pixel in I (that ranks how much the output prediction would change if a given pixel were to change). In this paper, the authors build on this and propose to measure feature importance by computing gradients of the output wrt scaled version of the input image, alpha*I, where alpha is a scalar between 0 and 1, then summing across all values of alpha to obtain their feature importance score. Here the scaling is simply linear scaling of the pixel values (alpha=0 is all black image, alpha=1 is original image). The authors call these scaled images “counterfactuals” which seems like quite an unnecessarily grandiose name for literally, a scaled image. The authors show a number of visualizations that indicate that the proposed feature importance score is more reasonable than just looking at gradients only with respect to the original image. They also show some quantitative evidence that the pixels highlighted by the proposed measure are more likely to fall on the objects rather than spurious parts of the image (in particular, see figure 5). The method is also applied to other types of networks. The quantitative evidence is quite limited and most of the paper is spent on qualitative results. While the goal of understanding deep networks is of key importance, it is not clear whether this paper really help elucidate much. The main interesting observation in this paper is that scaling an image by a small alpha (i.e. creating a faint image) places more “importance” on pixels on the object related to the correct class prediction. Beyond that, the paper builds a bit on this, but no deeper insight is gained. The authors propose some hand-wavy explanation of why using small alpha (faint image) may force the network to focus on the object, but the argument is not convincing. It would have been interesting to try to probe a bit deeper here, but that may not be easy. Ultimately, it is not clear how the proposed scheme for feature importance ranking is useful. First, it is still quite noisy and does not truly help understand what a deep net is doing on a particular image. Performing a single gradient descent step on an image (or on the collection of scaled versions of the image) hardly begins to probe the internal workings of a network. Moreover, as the authors admit, the scheme makes the assumption that each pixel is independent, which is clearly false. Considering the paper presents a very simple idea, it is far too long. The main paper is 14 pages, up to 19 with references and appendix. In general the writing is long-winded and overly verbose. It detracted substantially from the paper. The authors also define unnecessary terminology. “Gradients of Coutnerfactuals” sounds quite fancy, but is not very related to the ideas explored in the writing. I would encourage the authors to tighten up the writing and figures down to a more readable page length, and to more clearly spell out the ideas explored early on.
3: Clear rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
3
4
B1qHBhx4x
rJzaDdYxx
Scaling input samples.
This work proposes to use visualization of gradients to further understand the importance of features (i.e. pixels) for visual classification. Overall, this presented visualizations are interesting, however, the approach is very ad hoc. The authors do not explain why visualizing regular gradients isn't correlated with the importance of features relevant to the given visual category and proceed to the interior gradient approach. One particular question with regular gradients at features that form the spatial support of the visual class. Is it the case that the gradients of the features that are confident of the prediction remain low, while those with high uncertainty will have strong gradients? With regards to the interior gradients, it is unclear how the scaling parameter \alpha affects the feature importance and how it is related to attention. Finally, does this model use batch normalization?
5: Marginally below acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
5
4
ByVu2rZNe
r1G4z8cge
Good paper that could use a few more experiments
The authors show that the idea of smoothing a highly non-convex loss function can make deep neural networks easier to train. The paper is well-written, the idea is carefully analyzed, and the experiments are convincing, so we recommend acceptance. For a stronger recommendation, it would be valuable to perform more experiments. In particular, how does your smoothing technique compare to inserting probes in various layers of the network? Another interesting question would be how it performs on hard-to-optimize tasks such as algorithm learning. For example, in the "Neural GPU Learns Algorithms" paper the authors had to relax the weights of different layers of their RNN to make it optimize -- could this be avoided with your smoothing technique?
7: Good paper, accept
7
-1
rJF_QlzVl
r1G4z8cge
Interesting direction but requires improvements
This paper first discusses a general framework for improving optimization of a complicated function using a series of approximations. If the series of approximations are well-behaved compared to the original function, the optimization can in principle be sped up. This is then connected to a particular formulation in which a neural network can behave as a simpler network at high noise levels but regain full capacity as training proceeds and noise lowers. The idea and motivation of this paper are interesting and sound. As mentioned in my pre-review question, I was wondering about the relationship with shaping methods in RL. I agree with the authors that this paper differs from how shaping typically works (by modifying the problem itself) because in their implementation the architecture is what is "shaped". Nevertheless, the central idea in both cases is to solve a series of optimization problems of increasing difficulty. Therefore, I strongly suggest including a discussion of the differences between shaping, curriculum learning (I'm also not sure how this is different from shaping), and the present approach. The presentation of the method for neural networks lacks clarity in presentation. Improving this presentation will make this paper much easier to digest. In particular: - Alg. 1 can not be understood at the point that it is referenced. - Please explain the steps to Eq. 25 more clearly and connect to steps 1-6 in Alg. 1. - Define u(x) clearly before defining u*(x) There are several concerns with the experimental evaluations. There should be a discussion about why doesn't the method work for solving much more challenging network training problems, such as thin and deep networks. Some specific concerns: - The MLPs trained (Parity and Pentomino) are not very deep at all. An experiment of training thin networks with systematically increasing depth would be a better fit to test this method. Network depth is well known to pose optimization challenges. Instead, it is stated without reference that "Learning the mapping from sequences of characters to the word-embeddings is a difficult problem." - For cases where the gain is primarily due to the regularization effect, this method should be compared to other weight noise regularization methods. - I also suggest comparing to highway networks, since there are thematic similarities in Eq. 22, and it is possible that they can automatically anneal their behavior from simple to complex nets during training, considering that they are typically initialized with a bias towards copying behavior. - For CIFAR-10 experiment, does the mollified model also use Residual connections? If so, why? In either case, why does the mollified net actually train slower than the residual and stochastic depth networks? This is inconsistent with the MLP results. Overall, the ideas and developments in this paper are promising, but it needs more work to be a clear accept for me.
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
SkxpA6IEx
r1G4z8cge
interesting view on improving the optimization of neural networks, proposed practical mollifiers seem quite engineered
The paper shows the relation between stochastically perturbing the parameter of a model at training time, and considering a mollified objective function for optimization. Aside from Eqs. 4-7 where I found hard to understand what the weak gradient g exactly represents, Eq. 8 is intuitive and the subsequent Section 2.3 clearly establishes for a given class of mollifiers the equivalence between minimizing the mollified loss and training under Gaussian parameter noise. The authors then introduce generalized mollifiers to achieve a more sophisticated annealing effect applicable to state-of-the-art neural network architectures (e.g. deep ReLU nets and LSTM recurrent networks). The resulting annealing effect can be counterintuitive: In Section 4, the Binomial (Bernoulli?) parameter grows from 0 (deterministic identity layers) to 1 (deterministic ReLU layers), meaning that the network goes initially through a phase of adding noise. This might effectively have the reverse effect of annealing. Annealing schemes used in practice seem very engineered (e.g. Algorithm 1 that determines how units are activated at a given layer consists of 9 successive steps). Due to the more conceptual nature of the authors contribution (various annealing schemes have been proposed, but the application of the mollifying framework is original), it could have been useful to reserve a portion of the paper to analyze simpler models with more basic (non-generalized) mollifiers. For example, I would have liked to see simple cases, where the perturbation schemes derived from the mollifier framework would be demonstrably more suitable for optimization than a standard heuristically defined perturbation scheme.
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
SyJMcUrNg
HyNxRZ9xg
An incremental paper with minor contributions and weak baselines
The paper proposes a way to learn continuous features for input data which consists of multiple categorical data. The idea is to embed each category in a learnable low dimensional continuous space, explicitly compute the pair-wise interaction among different categories in a given input sample (which is achieved by either taking a component-wise dot product or component-wise addition), perform k-max pooling to select a subset of the most informative interactions, and repeat the process some number of times, until you get the final feature vector of the given input. This feature vector is then used as input to a classifier/regressor to accomplish the final task. The embeddings of the categories are learnt in the usual way. In the experiment section, the authors show on a synthetic dataset that their procedure is indeed able to select the relevant interactions in the data. On one real world dataset (iPinYou) the model seems to outperform a couple of simple baselines. My major concern with this paper is that their's nothing new in it. The idea of embedding the categorical data having mixed categories has already been handled in the past literature, where essentially one learns a separate lookup table for each class of categories: an input is represented by concatenation of the embeddings from these lookup table, and a non-linear function (a deep network) is plugged on top to get the features of the input. The only rather marginal contribution is the explicit modeling of the interactions among categories in equations 2/3/4/5. Other than that there's nothing else in the paper. Not only that, I feel that these interactions can (and should) automatically be learned by plugging in a deep convolutional network on top of the embeddings of the input. So I'm not sure how useful the contribution is. The experimental section is rather weak. They authors test their method on a single real world data set against a couple of rather weak baselines. I would have much preferred for them to evaluate against numerous models proposed in the literature which handle similar problems, including wsabie. While the authors argued in their response that wsabie was not suited for their problem, i strongly disagree with that claim. While the original wsabie paper showed experiments using images as inputs, their training methodology can easily be extended to other types of data sets, including categorical data. For instance, I conjecture that the model i proposed above (embed all the categorical inputs, concatenate the embeddings, plug a deep conv-net on top and train using some margin loss) will perform as well if not better than the hand coded interaction model proposed in this paper. Of course I could be wrong, but it would be far more convincing if their model was tested against such baselines.
5: Marginally below acceptance threshold
5
-1
By2jdClSg
HyNxRZ9xg
Weak comparison with baselines
A method for click prediction is presented. Inputs are a categorical variables and output is the click-through-rate. The categorical input data is embedded into a feature vector using a discriminative scheme that tries to predict whether a sample is fake or not. The embedding vector is passed through a series of SUM/MULT gates and K-most important interactions are identified (K-max pooling). This process is repeated multiple times (i.e. multiple layers) and the final feature is passed into a fully connected layer to output the click prediction rate. Authors claim: (1) Use of gates and K-max pooling allow modeling of interactions that lead to state of art results. (2) It is not straightforward to apply ideas in papers like word2vec to obtain feature embeddings and consequently they use the idea of discriminating between fake and true samples for feature learning. Theoretically convolutions can act as “sum” gates between pairs of input dimensions. Authors make these interactions explicit (i.e. imposed structure) by using gates. Now, the merit of the proposed method can be tested if a network using gates outperforms a network without gates. This baseline is critically missing – i.e. Embedding Vector followed by a series of convolution/pooling layers. Another related issue is that I am not sure if the number of parameters in the proposed model and the baseline models is similar or not. For instance – what is the total number of parameters in the CCPM model v/s the proposed model? Overall, there is no new idea in the paper. This by itself is not grounds for rejection if the paper outperforms established baselines. However, such comparison is weak and I encourage authors to perform these comparisons.
4: Ok but not good enough - rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4
4
Bkh10nWEl
HyNxRZ9xg
A decent paper but with some issues.
In this paper, the author proposed an approach for feature combination of two embeddings v1 and v2. This is done by first computing the pairwise combinations of the elements of v1 and v2 (with complicated nonlinearity), and then pick the K-Max as the output vector. For triple (or higher-order) combinations, two (or more) consecutive pairwise combinations are performed to yield the final representations. It seems that the approach is not directly related to categorical data, and can be applied to any embeddings (even if they are not one-hot). So is there any motivation that brings about this particular approach? What is the connection? There are many papers with similar ideas. CCPM (A convolutional click prediction model) that the authors have compared against, also proposes very similar network structure (conv + K-max + conv + K-max). In the paper, the author does not mention their conceptual similarity and difference versus CCPM. Compact Bilinear Pooling, https://arxiv.org/abs/1511.06062 has been proposed a year ago and yields state-of-the-art performances in Visual Question Answering https://arxiv.org/abs/1606.01847. So the author might need to compare against those methods. I understand that the proposed approach incorporates more nonlinear operations (rather than bilinear) in pairwise combination, but it is not clear whether bilinear operations is sufficient to achieve the same level of performance, and whether complicated operations (e.g., Eqn. 4) are needed. In the experiment, the performance seems to be not as impressive. There is about 1%-2% difference in performance between the proposed approach and baselines (e.g., in Tbl. 2, and Tbl. 3). Is that a big deal for click-rate prediction? When comparing among LR, FFM, CCPM, FNN, and proposed approach, the number of parameters (i.e., model capacity) are not shown. This could be unfair since the proposed model could have more parameters (note that the authors seems to misunderstand the questions). Besides, claiming that previous approaches does not learn representations seems to be a bit restrictive, since typical deep models learn the representation implicitly (e.g., CCPM and FNN listed in the paper as baselines).
4: Ok but not good enough - rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4
4
ByqijxHNg
HJ5PIaseg
Why we should use yet another dialogue system as an evaluation metric?
This paper propose a new evaluation metric for dialogue systems, and show it has a higher correlation with human annotation. I agree the MT based metrics like BLEU are too simple to capture enough semantic information, but the metric proposed in this paper seems to be too compliciated to explain. On the other hand, we could also use equation 1 as a retrieval based dialogue system. So what is suggested in this paper is basically to train one dialogue model to evaluate another model. Then, the high-level question is why we should trust this model? This question is also relevant to the last item of my detail comments. Detail comments: - How to justify what is captured/evaluated by this metric? In terms of BLEU, we know it actually capture n-gram overlap. But for this model, I guess it is hard to say what is captured. If this is true, then it is also difficult to answer the question like: will the data dependence be a problem? - why not build model incrementally? As shown in equation (1), this metric uses both context and reference to compute a score. Is it possible to show the score function using only reference? It will guarantee this metric use the same information source as BLEU or ROUGE. - Another question about equation (1), is it possible to design the metric to be a nonlinear function. Since from what I can tell, the comparison between BLEU (or ROUGE) and the new metric in Figure 3 is much like a comparison between the exponential scale and the linear scale. - I found the two reasons in section 5.2 are not convincing if we put them together. Based on these two reasons, I would like to see the correlation with average score. A more reasonable way is to show the results both with and without averaging. - In table 6, it looks like the metric favors the short responses. If that is true, this metric basically does the opposite of BLEU, since BLEU will panelize short sentences. On the other hand, human annotators also tends to give short respones high scores, since long sentences will have a higher chance to contain some irrelevant words. Can we eliminate the length factor during the annotation? Otherwise, it is not surprise that the correlation.
4: Ok but not good enough - rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4
4
SyoJ_FK4l
HJ5PIaseg
The idea is good, and the problem to address is very important. However, the proposed solution is short of desired one.
This paper addresses the issue of how to evaluate automatic dialogue responses. This is an important issue because current practice to automatically evaluate (e.g. BLEU, based on N-gram overlap, etc.) is NOT correlated well with the desired quality (i.e. human annotation). The proposed approach is based on the use of an LSTM encoding of dialogue context, reference response and model response with appropriate scoring, with the essence of training one dialogue model to evaluate another model. However, the proposed solution depends on a reasonably good dialogue model to begin with, which is not guaranteed, rendering the new metric possibly meaningless.
5: Marginally below acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
5
4
r1s67UeVl
HJ5PIaseg
The main idea of the paper is to learn the evaluation of dialogue responses in order to overcome limitations of current schemes such as BLEU
Overall the paper address an important problem: how to evaluate more appropriately automatic dialogue responses given the fact that current practice to automatically evaluate (BLEU, METEOR, ...) is often insufficient and sometimes misleading. The proposed approach using an LSTM-based encoding of dialogue context, reference response and model response(s) that are then scored in a linearly transformed space. While the overall approach is simple it is also quite intuitiv and allows end-to-end training. As the authors rightly argue simplicity is a feature both for interpretation as well as for speed. The experimental section reports on quite a range of experiments that seem fine to me and aim to convince the reader about the applicability of the approach. As mentioned also by others more insights from the experiments would have been great. I mentioned an in-depth failure case analysis and I would also suggest to go beyond the current dataset to really show generalizability of the proposed approach. In my opinion the paper is somewhat weaker on that front that it should have been. Overall I like the ideas put forward and the approach seems sensible though and the paper can thus be accepted.
7: Good paper, accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7
4
rkVScXzEl
r17RD2oxe
Intellectually interesting but I'm not sure what the real contribution is
I like this paper in that it is a creative application of computer vision to Biology. Or, at least, that would be a good narrative but I'm not confident biologists would actually care about the "Tree of Life" built from this method. There's not really any biology in this paper, either in methodology or evaluation. It boils down to a hierarchical clustering of visual categories with ground truth assumed to be the WordNet hierarchy (which may or may not be the biological ground truth inheritance relationships between species, if that is even possible to define -- it probably isn't for dog species which interbreed and it definitely isn't for vehicles) or the actual biological inheritance tree or what humans would do in the same task. If we're just worried about visual relationships and not inheritance relationships then a graph is the right structure, not a tree. A tree is needlessly lossy and imposes weird relationships (e.g. ImageNet has a photo of a "toy rabbit" and by tree distance it is maximally distant from "rabbit" because the toy is in the devices top level hierarchy and the real rabbit is in the animal branch. Are those two images really as semantically unrelated as is possible?). Our visual world is not a hierarchy. Our biological world can reasonably be defined as one. One could define the task of trying to recover the biological inheritance tree from visual inputs, although we know that would be tough to do because of situations like convergent evolution. Still, one could evaluate how well various visual features can recover the hierarchical relationship of biological organisms. This paper doesn't quite do that. And even if it did, it would still feel like a bit of a solution in search of a problem. The paper says that this type of exercise can help us understand deep features, but I'm not sure sure how much it reveals. I guess it's a fair question to ask if a particular feature produces meaningful class-to-class distances, but it's not clear that the biological tree of life or the wordnet hierarchy is the right ground truth for that (I'd argue it's not). Finally, the paper mentions human baselines in a few places but I'm not really seeing it. "Experiments show that the proposed method using deep representation is very competitive to human beings in building the tree of life based on the visual similarity of the species." and then later "The reconstructed quality is as good as what human beings could reconstruct based on the visual similarity." That's the extent of the experiment? A qualitative result and the declaration that it's as good as humans could do?
4: Ok but not good enough - rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4
4
rJi_oCBVx
r17RD2oxe
Nice application of using deep features but lack technical novelty
This paper introduces a hierarchical clustering method using learned CNN features to build 'the tree of life'. The assumption is that the feature similarity indicates the distance in the tree. The authors tried three different ways to construct the tree: 1) approximation central point 2) minimum spanning tree and 3) multidimensional scaling based method. Out of them, MDS works the best. It is a nice application of using deep features. However, I lean toward rejecting the paper because the following reasons: 1) All experiments are conducted in very small scale. The experiments include 6 fish species, 11 canine species, 8 vehicle classes. There are no quantitative results, only by visualizing the generated tree versus the wordNet tree. Moreover, the assumption of using wordNet is not quite valid. WordNet is not designed for biology purpose and it might not reflect the true evolutionary relationship between species. 2) Limited technical novelty. Most parts of the pipeline are standard, e.g. use pretrained model for feature extraction, use previous methods to construct hierarchical clustering. I think the technical contribution of this paper is very limited.
4: Ok but not good enough - rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4
4
HkptTW8Vl
r17RD2oxe
concerns about both contributions
The paper presents a simple method for constructing a visual hierarchy of ImageNet classes based on a CNN trained on discriminate between the classes. It investigates two metrics for measuring inter-class similarity: (1) softmax probability outputs, i.e., the class confusion matrix, and (2) L2 distance between fc7 features, along with three methods for constructing the hierarchy given the distance matrix: (1) approximation central point, (2) minimal spanning tree, and (3) multidimensional scaling of Borg&Groenen 2005. There are two claimed contributions: (1) Constructs a biology evolutionary tree, and (2) Gives insight into the representations produced by deep networks. Regarding (1), while the motivation of the work is grounded in biology, in practice the method is based only on visual similarity. The constructed trees thus can’t be expected to reflect the evolutionary hierarchy, and in fact there are no quantitative experiments that demonstrate that they do. Regarding (2), the technical depth of the exploration is not sufficient for ICLR. I’m not sure what we can conclude from the paper beyond the fact that CNNs are able to group categories together based on visual similarities, and deeper networks are able to do this better than more shallow networks (Fig 2). In summary, this paper is unfortunately not ready for publication at this time.
3: Clear rejection
3
-1
H1UqBl9mx
HJGODLqgx
Novel model for temporal data
This paper presents a novel model for unsupervised segmentation and classification of time series data. A recurrent hidden semi-markov model is proposed. This extends regular hidden semi-markov models to include a recurrent neural network (RNN) for observations. Each latent class has its own RNN for modeling observations for that category. Further, an efficient training procedure based on a variational approximation. Experiments demonstrate the effectiveness of the approach for modeling synthetic and real time series data. This is an interesting and novel paper. The proposed method is a well-motivated combination of duration modeling HMMs with state of the art observation models based on RNNs. The combination alleviates shortcomings of standard HSMM variants in terms of the simplicity of the emission probability. The method is technically sound and demonstrated to be effective. It would be interesting to see how this method compares quantitatively against CRF-based methods (e.g. Ammar, Dyer, and Smith NIPS 2014). CRFs can model more complex data likelihoods, though as noted in the response phase there are still limitations. Regardless, I think the merits of using RNNs for the class-specific generative models are clear.
7: Good paper, accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7
4
SkaIJWuEx
HJGODLqgx
Good method for HSMM estimation
This paper proposes a novel and interesting way to tackle the difficulties of performing inference atop HSMM. The idea of using an embedded bi-RNN to approximate the posterior is a reasonable and clever idea. That being said, I think two aspects may need further improvement: (1) An explanation as to why a bi-RNN can provide more accurate approximations than other modeling choices (e.g. structured mean field that uses a sequential model to formulate the variational distribution) is needed. I think it would make the paper stronger if the authors can explain in an intuitive way why this modeling choice is better than some other natural choices (in addition to empirical verification). (2) The real world datasets seem to be quite small (e.g. less than 100 sequences). Experimental results reported on larger datasets may also strengthen the paper.
7: Good paper, accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7
4
BJjnIbMVe
HJGODLqgx
Review
Putting the score for now, will post the full review tomorrow.
7: Good paper, accept
3: The reviewer is fairly confident that the evaluation is correct
7
3
H1R2GeGNg
rky3QW9le
Review
A new sparse coding model is introduced that learns features jointly with their transformations. It is found that inference over per-image transformation variables is hard, so the authors suggest tying these variables across all data points, turning them into global parameters, and using multiple transformations for each feature. Furthermore, it is suggested to use a tree of transformations, where each path down the tree generates a feature by multiplying the root feature by the transformations associated with the edges. The one-layer tree model achieves similar reconstruction error as traditional sparse coding, while using fewer parameters. This is a nice addition to the literature on sparse coding and the literature on learning transformation models. The authors identify and deal with a difficult inference problem that can occur in transformation models. That said, I am skeptical about the usefulness of the general approach. The authors take it as a given that “learning sparse features and transformations jointly” is an important goal in itself, but this is never really argued or demonstrated with experiments. It doesn’t seem like this method enables new applications, extends our understanding of learning what/where pathways in the brain, or improve our ability to model natural images. The authors claim that the model extracts pose information, but although the model explicitly captures the transformation that relates different features in a tree, at test time, inference is only performed over the (sparse) coefficient associated with each (feature, transformation) combination, just like in sparse coding. It is not clear what we gain by knowing that each coefficient is associated with a transformation, especially since there are many models that do this general “what / where” split. It would be good to check that the x_{v->b} actually change significantly from their initialization values. The loss surface still looks pretty bad even for tied transformations, so they may actually not move much. Does the proposed model work better according to some measure, compared to a model where x_{v->b} are fixed and chosen from some reasonable range of parameter values (either randomly or spaced evenly)? One of the conceptually interesting aspects of the paper is the idea of a tree of transformations, but the advantage of deeper trees is never demonstrated convincingly. It looks like the authors have only just gotten this approach to work on toy data with vertical and horizontal bars. Finally, it is not clear how the method could be extended to have multiple layers. The transformation operators T can be defined in the first layer because they act on the input space, but the same cannot be done in the learned feature space. It is also not clear how the pose information should be further processed in a hierarchical manner, or how learning in a deep version should work. In summary, I do not recommend this paper for publication, because it is not clear what problem is being solved, the method is only moderately novel and the novel aspects are not convincingly shown to be beneficial.
4: Ok but not good enough - rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4
4
S17BLJQVg
rky3QW9le
Review of "transformational sparse coding"
This paper proposes an approach to unsupervised learning based on a modification to sparse coding that allows for explicit modeling of transformations (such as shift, rotation, etc.), as opposed to simple pooling as is typically done in convnets. Results are shown for training on natural images, demonstrating that the algorithm learns about features and their transformations in the data. A comparison to traditional sparse coding shows that it represents images with fewer degrees of freedom. This seems like a good and interesting approach, but the work seems like its still in its early formative stages rather than a complete work with a compelling punch line. For example one of the motivations is that you'd like to represent pose along with the identity of an object. While this work seems well on its way to that goal, it doesn't quite get there - it leaves a lot of dots still to be connected. Also there are a number of things that aren't clear in the paper: o The central idea of the paper it seems is the use of a transformational sparse coding tree to make tractable the inference of the Lie group parameters x_k. But how exactly this is done is not at all clear. For example, the sentence: "The main idea is to gradually marginalize over an increasing range of transformations," is suggestive but not clear. This needs to be much better defined. What do you mean by marginalization in this context? o The connection between the Lie group operator and the tree leaves and weights w_b is not at all clear. The learning rule spells out the gradient for the Lie group operator, but how this is used to learn the leaves of the tree is not clear. A lot is left to the imagination here. This is especially confusing because although the Lie group operator is introduced earlier, it is then stated that its not tractable for inference because there are too many local minima, and this motivates the tree approach instead. So its not clear why you are learning the Lie group operator. o It is stated that "Averaging over many data points, smoothens the surface of the error function." I don't understand why you would average over many data points. It seems each would have its own transformation, no? o What data do you train on? How is it generated? Do you generate patches with known transformations and then show that you can recover them? Please explain. The results shown in Figure 4 look very interesting, but given the lack of clarity in the above, difficult to interpret and understand what this means, and its significance. I would encourage the authors to rewrite the paper more clearly and also to put more work into further developing these ideas, which seem very promising.
4: Ok but not good enough - rejection
4: The reviewer is confident but not absolutely certain that the evaluation is correct
4
4
B1QEUHL4e
rky3QW9le
review
This paper trains a generative model of image patches, where dictionary elements undergo gated linear transformations before being combined. The transformations are motivated in terms of Lie group operators, though in practice they are a set of fixed linear transformations. This is motivated strongly in terms of learning a hierarchy of transformations, though only one layer is used in the experiments (except for a toy case in the appendix). I like the motivation for this algorithm. The realization seems very similar to a group or block sparse coding implementation. I was disappointed by the restriction to linear transformations. The experiments were all toy cases, demonstrating that the algorithm can learn groups of Gabor- or center surround-like features. They would have been somewhat underpowered five years ago, and seemed extremely small by today's standards. Specific comments: Based on common practices in ML literature, I have a strong bias to think of $x$ as inputs and $w$ as network weights. Latent variables are often $z$ or $a$. Depending on your target audience, I would suggest permuting your choice of symbols so the reader can more quickly interpret your model. nit: number all equations for easier reference sec 2.2 -- It's weird that the transformation is fixed, but is still written as a function of x. sec 2.3 -- The updated text here confuses me actually. I had thought that you were using a fixed set of linear transformations, and were motivating in terms of Lie groups, but were not actually taking matrix exponentials in your algorithm. The equations in the second half of this section suggest you are working with matrix exponentials though. I'm not sure which direction I'm confused in, but probably good to clarify the text either way. BTW -- there's another possible solution to the local minima difficulty, which is the one used in Sohl-Dickstein, 2010. There, they introduce blurring operators matched to each transformation operator, and gradient descent can escape local minima by detouring through coarser (more blurred) scales. sec 3.2 -- I believe by degrees of freedom you mean the number of model parameters, not the number of latent coefficients that must be inferred? Should make this more clear. Is it more appropriate to compare reconstruction error while matching number of model parameters, or number of latent variables? I wonder if a convolutional version of this algorithm would be practical / would make it more suited as a generative model of whole images. ==== post rebuttal update Thank you for taking the time to write the rebuttal! I have read it, but it did not significantly effect my rating.
5: Marginally below acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
5
4
ByfsAgHVg
SJIMPr9eg
The contribution is incremental with no impressive comparison results
This paper proposes a boosting based ensemble procedure for residual networks by adopting the Deep Incremental Boosting method that was used for CNN's(Mosca & Magoulas, 2016a). At each step t, a new block of layers are added to the network at a position p_t and the weights of all layers are copied to the current network to speed up training. The method is not sufficiently novel since the steps of Deep Incremental Boosting are slightly adopted. Instead of adding a layer to the end of the network, this version adds a block of layers to a position p_t (starts at a selected position p_0) and merges layer accordingly hence slightly adopts DIB. The empirical analysis does not use any data-augmentation. It is not clear whether the improvements (if there is) of the ensemble disappear after data-augmentation. Also, one of the main baselines, DIB has no-skip connections therefore this can negatively affect the fair comparison. The authors argue that they did not involve state of art Res Nets since their analysis focuses on the ensemble approach, however any potential improvement of the ensemble can be compensated with an inherent feature of Res Net variant. The boosting procedure can be computationally restrictive in case of ImageNet training and Res Net variants may perform much better in that case too. Therefore the baselines should include the state of art Res Nets and Dense Convolutional networks hence current results are preliminary. In addition, it is not clear how sensitive the boosting to the selection of injection point. This paper adopts DIB to Res Nets and provides some empirical analysis however the contribution is not sufficiently novel and the empirical results are not satisfactory for demonstrating that the method is significant. Pros -provides some preliminary results for boosting of Res Nets Cons -not sufficiently novel: an incremental approach -empirical analysis is not satisfactory
3: Clear rejection
3
-1
SJY52vWVg
SJIMPr9eg
Interesting ideas, unconvincing execution, lack of comparisons to the literature
The paper under consideration proposes a set of procedures for incrementally expanding a residual network by adding layers via a boosting criterion. The main barrier to publication is the weak empirical validation. The tasks considered are quite small scale in 2016 (and MNIST with a convolutional net is basically an uninteresting test by this point). The paper doesn't compare to the literature, and CIFAR-10 results fail to improve upon rather simple, single-network published baselines (Springenberg et al, 2015 for example, obtains 92% without data augmentation) and I'm pretty sure there's a simple ResNet result somewhere that outshines these too. The CIFAR100 results are a little bit interesting as they are better than I'm used to seeing (I haven't done a recent literature crawl), and this is unsurprising -- you'd expect ensembles to do well when there's a dearth of labeled training data, and here there are only a few hundred per label. But then it's typical on both CIFAR10 and CIFAR100 to use simple data augmentation schemes which aren't employed here, and these and other forms of regularization are a simpler alternative to a complicated iterative augmentation scheme like this. It'd be easier to sell this method either as an option for scarce labeled datasets where data augmentation is non-trivial (but then for most image-related applications, random crops and reflections are easy and valid), but that would necessitate different benchmarks, and comparison against simpler methods like said data augmentation, dropout (especially, due to the ensemble interpretation), and so on.
3: Clear rejection
3
-1
rkqolxB4e
SJIMPr9eg
Lack of comparison
The authors mention that they are not aiming to have SOTA results. However, that an ensemble of resnets has lower performance than some of single network results, indicates that further experimentation preferably on larger datasets is necessary. The literature review could at least mention some existing works such as wide resnets https://arxiv.org/abs/1605.07146 or the ones that use knowledge distillation for ensemble of networks for comparison on cifar. While the manuscript is well-written and the idea is novel, it needs to be extended with experiments.
4: Ok but not good enough - rejection
4
-1
r10fdSBNe
HyWWpw5ex
The paper introduces a time dependent recommender system based on point processes parametrized by time dependent user and item latent representations. The later are modeled as coupled – autoregressive processes – i.e. the representation of a user/item changes when he interacts with an item/user, and is a function of both the user and the item representations before time t. This is called coevolution here and the autoregressive process is called recurrent NN. The model may also incorporate heterogeneous inputs. Experiments are performed on several datasets, and the model is compared with different baselines. There are several contributions in the paper: 1) modeling recommendation via parametrized point processes where the parameter dynamics are modeled by latent user/item representations, 2) an optimization algorithm for maximizing the likelihood of this process, with different technical tricks that seem to break its intrinsic complexity, 3) evaluation experiments for time dependent recommendation. The paper by the same authors (NIPS 2016) describes a similar model of continuous time coevolution, and a similar evaluation. The difference lies in the details of the model: the point process model is not the same and of the latent factor dynamic model is slightly different, but the modeling approach and the arguments are exactly the same. By the end, one does not know what makes this model perform better than the one proposed in NIPS, is it the choice for the process, the new parametrization? Both are quite similar. There is no justification on the choice of the specific form of the point process in the two papers. Did the authors tried other forms as well? The same remark applies for the form of the dynamical process: the non-linearity used for the modeling of the latent user/item vectors here is limited to a sigmoid function, which probably does not change much w.r.t. a linear model, but there is no evidence of the role of this non linearity in the paper. Note that there are some inconsistencies between the results in the two papers. Concerning the evaluation, the authors introduce two criteria. I did not get exactly how they evaluate the item recommendation: it is mentioned that at each time t, the model predicts the item the user will interact with. Do you mean, the next item the user will interact with after time t? For the time prediction, why is it a relevant metric for recommendation? A comparison of the complexity, or execution time of the different methods would be helpful. The complexity of your method is apparently proportional to #items*#users, what are the complexity limits of your methods. Overall, the paper is quite nice and looks technically sound, albeit many details are missing. On the other hand, I have a mixed feeling because of the similarity with NIPS paper. The authors should have make a better work at convincing us that this is not a marginal extension of previous work by the authors. I was not convinced either by the evaluation criteria and there is no evidence that the model can be used for large datasets.
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
r1IyyRbNg
HyWWpw5ex
review for Recurrent Coevolutionary Feature Embedding Processes for Recommendation
This paper proposes a method to model time changing dynamics in collaborative filtering. Comments: 1) The main idea of the paper is build upon similar to a previous work by the same group of author (Wang et.al KDD), the major difference appears to be change some of the latent factors to be RNN 2) The author describes a BPTT technique to train the model 3) The author introduced time prediction as a new metric to evaluate the effectiveness of time dependent model. However, this need to be condition on a given user-item pair. 4) It would be interesting to consider other metrics, for example - The switching time where a user changes his/her to another item - Jointly predict the next item and switching time. In summary, this is a paper that improves over an existing work on time dynamics model in recommender system. The time prediction metric is interesting and opens up interesting discussion on how we should evaluate recommender systems when time is involved (see also comments).
6: Marginally above acceptance threshold
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6
4
HJbEIfxEg
HyWWpw5ex
review
The paper seeks to predict user events (interactions with items at a particular point in time). Roughly speaking the contributions are as follows: (a) the paper models the co-evolutionary process of users' preferences toward items (b) the paper is able to incorporate external sources of information, such as user and item features (c) the process proposed is generative, so is able to estimate specific time-points at which events occur (d) the model is able to account for non-linearities in the above Following the pre-review questions, I understand that it is the combination of (a) and (c) that is the most novel aspect of the paper. A fully generative process which can be sampled is certainly nice (though of course, non-generative processes like regular old regression can estimate specific time points and such too, so not sure in practice how relevant this distinction is). Other than that the above parts have all appeared in some combination in previous work, though the combination of parts here certainly passes the novelty bar. I hadn't quite followed the issue mentioned in the pre-review discussion that the model requires multiple interactions per userXitem pair in order to fit the model (e.g. a user interacts with the same business multiple times). This is a slightly unusual setting compared to most temporal recommender systems work. I question to some extent whether this problem setting isn't a bit restrictive. That being said I take the point about why the authors had to subsample the Yelp data, but keeping only users with "hundreds" of events means that you're left with a very biased sample of the user base. Other than the above issues, the paper is technically nice, and the experiments include strong baselines and reports good performance.
6: Marginally above acceptance threshold
3: The reviewer is fairly confident that the evaluation is correct
6
3
B1wgPCMVx
BJrFC6ceg
Great empirical work, insightful ablation experiments, code is available which is a nice contribution to the community
# Review This paper proposes five modifications to improve PixelCNN, a generative model with tractable likelihood. The authors empirically showed the impact of each of their proposed modifications using a series of ablation experiments. They also reported a new state-of-the-art result on CIFAR-10. Improving generative models, especially for images, is an active research area and this paper definitely contributes to it. # Pros The authors motivate each modification well they proposed. They also used ablation experiments to show each of them is important. The authors use a discretized mixture of logistic distributions to model the conditional distribution of a sub-pixel instead of a 256-way softmax. This allows to have a lower output dimension and to be better suited at learning ordinal relationships between sub-pixel values. The authors also mentioned it speeded up training time (less computation) as well as the convergence during the optimization of the model (as shown in Fig.6). The authors make an interesting remark about how the dependencies between the color channels of a pixel are likely to be relatively simple and do not require a deep network to model. This allows them to have a simplified architecture where you don't have to separate out all feature maps in 3 groups depending on whether or not they can see the R/G/B sub-pixel of the current location. # Cons It is not clear to me what the predictive distribution for the green channel (and the blue) looks like. More precisely, how are the means of the mixture components linearly depending on the value of the red sub-pixel? I would have liked to see the equations for them. # Minor Comments In Fig.2 it is written "Sequence of 6 layers" but in the text (Section 2.4) it says 6 blocks of 5 ResNet layers. What is the remaining layer? In Fig.2 what does the first "green square -> blue square" which isn't in the white rectangle represents? Is there any reason why the mixture indicator is shared across all three channels?
7: Good paper, accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7
4
Bkc_sOZ4l
BJrFC6ceg
Related work
Summary: This paper on autoregressive generative models explores various extensions of PixelCNNs. The proposed changes are to replace the softmax function with a logistic mixture model, to use dropout for regularization, to use downsampling to increase receptive field size, and the introduction of particular skip connections. The authors find that this allows the PixelCNN to outperform a PixelRNN on CIFAR-10, the previous state-of-the-art model. The authors further explore the performance of PixelCNNs with smaller receptive field sizes. Review: This is a useful contribution towards better tractable image models. In particular, autoregressive models can be quite slow at test time, and the more efficient architectures described here should help with that. My main criticism regards the severe neglect of related work. Mixture models have been used a lot in autoregressive image modeling, including for multivariate conditional densities and including downsampling to increase receptive field size, albeit in a different manner: Domke (2008), Hosseini et al. (2010), Theis et al. (2012), Uria et al. (2013), Theis et al. (2015). Note that the logistic distribution is a special case of the Gaussian scale mixture (West, 1978). The main difference seems to be the integration of the density to model integers. While this is clearly a good idea and the right way forward, the authors claim but do not support that not doing this has “proved to be a problem for earlier models based on continuous distributions”. Please elaborate, add a reference, or ideally report the performance achieved by PixelCNN++ without integration (and instead adding uniform noise to make the variables continuous). 60,000 images are not a lot in a high-dimensional space. While I can see the usefulness of regularization for specialized content – and this can serve as a good example to demonstrate the usefulness of dropout – why not use “80 million tiny images” (superset of CIFAR-10) for natural images? Semi-supervised learning should be fairly trivial here (because the model’s likelihood is tractable), so this data could even be used in the class-conditional case. It would be interesting to know how fast the different models are at test time (i.e., when generating images).
7: Good paper, accept
7
-1
B1DF5VFEg
BJrFC6ceg
Review
Apologies for the late submission of this review, and thank you for the author’s responses to earlier questions. This submission proposes an improved implementation of the PixelCNN generative model. Most of the improvements are small and can be considered as specific technical details such as the use of dropout and skip connections, while others are slightly more substantial such as the use of a different likelihood model and multiscale analysis. The submission demonstrates state-of-the-art likelihood results on CIFAR-10. My summary of the main contribution: Autoregressive-type models - of which PixelCNN is an example - are a nice class of models as their likelihood can be evaluated in closed form. A main differentiator for this type of models is how the conditional likelihood of one pixel conditioned on its causal neighbourhood is modelled: - In one line of work such as (Theis et al, 2012 MCGSM, Theis et al 2015 Spatial LSTM) the conditional distribution is modelled as a continuous density over real numbers. This approach has limitations: We know that in observed data pixel intensities are quantized to a discrete integer representation so a discrete distribution could give better likelihoods. Furthermore these continuous distributions have a tail and assign some probability mass outside the valid range of pixel intensities, which may hurt the likelihood. - In more recent work by van den Oord and colleagues the conditional likelihood is modelled as an arbitrary discrete distribution over the 256 possible values for pixel intensities. This does not suffer from the limitations of continuous likelihoods, but it also seems wasteful and is not very data efficient. The authors propose something in the middle by keeping the discretized nature of the conditional likelihood, but restricting the discrete distribution to ones whose CDF that can be modeled as a linear combination of sigmoids. This approach makes sense to me, and is new in a way, but it doesn’t appear to be very revolutionary or significant to me. The second somewhat significant modification is the use of downsampling and multiscale modelling (as opposed to dilated convolutions). The main motivation for the authors to do this is saving computation time while keeping the multiscale flexibility of the model. The authors also introduce shortcut connections to compensate for the potential loss of information as they perform downsampling. Again, I feel that this modification not particularly revolutionary. Multiscale image analysis with autoregressive generative models has been done for example in (Theis et al, 2012) and several other papers. Overall I felt that this submission falls short on presenting substantially new ideas, and reads more like documentation for a particular implementation of an existing idea.
6: Marginally above acceptance threshold
3: The reviewer is fairly confident that the evaluation is correct
6
3
ByL3yfYrx
rJqFGTslg
Good idea, well thought through and decently tested
The idea of "pruning where it matters" is great. The authors do a very good job of thinking it through, and taking to the next level by studying pruning across different layers too. Extra points for clarity of the description and good pictures. Even more extra points for actually specifying what spaces are which layers are mapping into which (\mathbb symbol - two thumbs up!). The experiments are well done and the results are encouraging. Of course, more experiments would be even nicer, but is it ever not the case? My question/issue - is the proposed pruning criterion proposed? Yes, pruning on the filter level is what in my opinion is the way to go, but I would be curious how the "min sum of weights" criterion compares to other approaches. How does it compare to other pruning criteria? Is it better than "pruning at random"? Overall, I liked the paper.
7: Good paper, accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7
4
B1xzDtrVg
rJqFGTslg
Review
This paper proposes a simple method for pruning filters in two types of architecture to decrease the time for execution. Pros: - Impressively retains accuracy on popular models on ImageNet and Cifar10 Cons: - There is no justification for for low L1 or L2 norm being a good selection criteria. There are two easy critical missing baselines of 1) randomly pruning filters, 2) pruning filters with low activation pattern norms on training set. - There is no direct comparison to the multitude of other pruning and speedup methods. - While FLOPs are reported, it is not clear what empirical speedup this method gives, which is what people interested in these methods care about. Wall-clock speedup is trivial to report, so the lack of wall-clock speedup is suspect.
6: Marginally above acceptance threshold
6
-1
HJu2_ZGNe
rJqFGTslg
Simple idea with good experiments; transfer learning results would improve it
This paper proposes a very simple idea (prune low-weight filters from ConvNets) in order to reduce FLOPs and memory consumption. The proposed method is experimented on with VGG-16 and ResNets on CIFAR10 and ImageNet. Pros: - Creates *structured* sparsity, which automatically improves performance without changing the underlying convolution implementation - Very simple to implement Cons: - No evaluation of how pruning impacts transfer learning I'm generally positive about this work. While the main idea is almost trivial, I am not aware of any other papers that propose exactly the same idea and show a good set of experimental results. Therefore I'm inclined to accept it. The only major downside is that the paper does not evaluate the impact of filter pruning on transfer learning. For example, there is not much interest in the tasks of CIFAR10 or even ImageNet. Instead, the main interest in both academia and industry is the value of the learned representation for transferring to other tasks. One might expect filter pruning (or any other kind of pruning) to harm transfer learning. It's possible that the while the main task has about the same performance, transfer learning is strongly hurt. This paper has missed an opportunity to explore that direction. Nit: Fig 2 title says VGG-16 in (b) and VGG_BN in (c). Are these the same models?
7: Good paper, accept
4: The reviewer is confident but not absolutely certain that the evaluation is correct
7
4
HyAUArZSl
rJqFGTslg
Pruning Filters for Efficient ConvNets
This paper prunes entire groups of filters in CNN so that they reduce computational cost and at the same time do not result in sparse connectivity. This result is important to speed up and compress neural networks while being able to use standard fully-connected linear algebra routines. The results are a 10% improvements in ResNet-like and ImageNet, which may be also achieved with better design of networks. New networks should have been also compared, but this we know it is time-consuming. A good paper with some useful results.
7: Good paper, accept
7
-1
Hy5_Hg5Vl
H1GEvHcee
The authors proposed to use leaky rectified linear units replacing binary units in Gaussian RBM. A sampling method was presented to train the leaky-ReLU RBM. In the experimental section, AIS estimated likelihood on Cifar10 and SVHN were reported. It's interesting for trying different nonlinear hidden units for RBM. However, there are some concerns for the current work. 1. The author did not explain why the proposed sampling method (Alg. 2) is correct. And the additional computation cost (the inner loop and the projection) should be discussed. 2. The results (both the resulting likelihood and the generative samples) of Gaussian RBM are much worse than what we have experienced. It seems that the Gaussian RBM were not trained properly. 3. The representation learned from a good generative model often helps the classification task when there are fewer label samples. Gaussian RBM works well for texture synthesis tasks in which mixing is an important issue. The authors are encouraged to do more experiments in these two direction.
5: Marginally below acceptance threshold
5
-1