Dataset schema (per-note records; field, type, observed range):
- forum_id: string, 8-20 characters
- forum_title: string, 1-899 characters
- forum_authors: sequence, 0-174 items
- forum_abstract: string, 0-4.69k characters
- forum_keywords: sequence, 0-35 items
- forum_pdf_url: string, 38-50 characters
- forum_url: string, 40-52 characters
- note_id: string, 8-20 characters
- note_type: string, 6 distinct values
- note_created: int64, 1,360B-1,737B (Unix epoch milliseconds)
- note_replyto: string, 4-20 characters
- note_readers: sequence, 1-8 items
- note_signatures: sequence, 1-2 items
- venue: string, 349 distinct values
- year: string, 12 distinct values
- note_text: string, 10-56.5k characters
msGKsXQXNiCBk
Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors
[ "Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng" ]
Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledge base. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 82.8%.
[ "new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications" ]
https://openreview.net/pdf?id=msGKsXQXNiCBk
https://openreview.net/forum?id=msGKsXQXNiCBk
OgesTW8qZ5TWn
review
1,363,419,120,000
msGKsXQXNiCBk
[ "everyone" ]
[ "Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng" ]
ICLR.cc/2013/conference
2013
review: We thank the reviewers for their comments and agree with most of them. - We've updated our paper on arXiv and added the important experimental comparison to the model in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012). Experimental results show that our model also outperforms this model in terms of ranking & classification. - We didn't report results on the original data because of the overlap between the training and testing sets: 80.23% of the examples in the testing set appear exactly in the training set, and 99.23% of the examples have e1 and e2 'connected' via some relation in the training set. Some relationships, such as 'is similar to', are symmetric. Furthermore, we can reach a top-10 accuracy of 92.8% (instead of the 76.7% reported in the original paper) using their model. - The classification task helps us predict whether a relationship is correct or not, so we report results for both classification and ranking. - To use the pre-trained word vectors, we ignore the senses of the WordNet entities in this paper. - The experiments section is short because we tried to keep the paper's length close to the recommended length. From the ICLR website: 'Papers submitted to this track are ideally 2-3 pages long'.
msGKsXQXNiCBk
Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors
[ "Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng" ]
Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledge base. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 82.8%.
[ "new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications" ]
https://openreview.net/pdf?id=msGKsXQXNiCBk
https://openreview.net/forum?id=msGKsXQXNiCBk
PnfD3BSBKbnZh
review
1,362,079,260,000
msGKsXQXNiCBk
[ "everyone" ]
[ "anonymous reviewer 75b8" ]
ICLR.cc/2013/conference
2013
title: review of Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors review: - A brief summary of the paper's contributions, in the context of prior work. This paper proposes a new energy function (or scoring function) for ranking pairs of entities and their relationship type. The energy function is based on a so-called Neural Tensor Network, which essentially introduces a bilinear term in the computation of the hidden layer input activations of a single hidden layer neural network. A favorable comparison with the energy function proposed in Bordes et al. 2011 is presented. - An assessment of novelty and quality. This work follows the work of Bordes et al. 2011 fairly closely, with the main difference being the choice of the energy/scoring function. This is an advantage in terms of the interpretability of the results: this paper clearly demonstrates that the proposed energy function is better, since everything else (the training objective, the evaluation procedure) is the same. It is, however, a disadvantage in terms of novelty, as it makes the work somewhat incremental. Bordes et al. 2011 also proposed an improved version of their model, using kernel density estimation, which is not used here. However, I suppose that the proposed model in this paper could also be similarly improved. More importantly, Bordes and collaborators have more recently looked at another type of energy function, in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012), which also involves bilinear terms and is thus similar (but not identical) to the energy function proposed here. In fact, the Bordes et al. 2012 energy function seems to outperform the 2011 one (without KDE), hence I would argue that the former would have been a better baseline for comparisons. - A list of pros and cons (reasons to accept/reject). Pros: Clear demonstration of the superiority of the proposed energy function over that of Bordes et al. 2011. Cons: No comparison with the more recent energy function of Bordes et al. 2012, which has some similarities to the proposed Neural Tensor Networks. Since this was submitted to the workshop track, I would still be inclined to have this paper accepted. This is clearly work in progress (the submitted paper is only 4 pages long), and I think this line of work should be encouraged. However, I would suggest the authors also perform a comparison with the scoring function of Bordes et al. 2012 in future work, using their current protocol (which is nicely set up to thoroughly compare energy functions).
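The review's description of the scoring function - a single hidden layer whose input activations receive an extra bilinear (tensor) term before a tanh - can be made concrete with a small sketch. This is our illustration of that general form with illustrative shapes and names, not code from the paper:

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Score a candidate (entity1, relation, entity2) triple.

    Illustrative shapes (all names are ours): e1, e2 are (d,) entity vectors;
    W is a (k, d, d) relation-specific tensor (one d x d slice per hidden unit);
    V is (k, 2d), b is (k,), u is (k,). The bilinear term e1^T W_k e2 is the extra
    term in the hidden layer's input activations that the review mentions, and
    tanh is the non-linearity the reviewers ask about.
    """
    bilinear = np.einsum('i,kij,j->k', e1, W, e2)    # one bilinear form per hidden unit
    standard = V @ np.concatenate([e1, e2]) + b      # ordinary single-hidden-layer term
    return float(u @ np.tanh(bilinear + standard))   # scalar plausibility score
```

The k tensor slices let each hidden unit model a relation-specific interaction between the two entity vectors, which is the added capacity the reviewers contrast with the Bordes et al. 2011 energy function.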
msGKsXQXNiCBk
Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors
[ "Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng" ]
Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledge base. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 82.8%.
[ "new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications" ]
https://openreview.net/pdf?id=msGKsXQXNiCBk
https://openreview.net/forum?id=msGKsXQXNiCBk
yA-tyFEFr2A5u
review
1,362,246,000,000
msGKsXQXNiCBk
[ "everyone" ]
[ "anonymous reviewer 7e51" ]
ICLR.cc/2013/conference
2013
title: review of Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors review: This paper proposes a new model for modeling data of multi-relational knowledge bases such as WordNet or YAGO. Inspired by the work of (Bordes et al., AAAI11), they propose a neural network-based scoring function, which is trained to assign high scores to plausible relations. Evaluation is performed on WordNet. The main difference w.r.t. (Bordes et al., AAAI11) is the scoring function, which now involves a tensor product to encode the relation type, and the use of a non-linearity. It would be interesting if the authors could comment on the motivations of their architecture. For instance, what could the tanh model here? The experiments raise some questions: - Why not also report the results on the original data set of (Bordes et al., AAAI11)? Even if the data set contains duplicates, it still makes a reference point. - The classification task is hard to motivate. Link prediction is a detection problem: very few positives to find in a huge set of negative examples. Transforming that into a balanced classification problem is nonsense to me. There have been several follow-up works to (Bordes et al., AAAI11), such as (Bordes et al., AISTATS12) or (Jenatton et al., NIPS12), that should be cited and discussed (some of those use tensors to encode the relation type as well). Besides, they would also make the experimental comparison stronger. It should be explained how the pre-trained word vectors from the model of Collobert & Weston are used in the model. WordNet entities are senses and not words and, of course, there is no direct mapping from words to senses. Which heuristic has been used? Pros: - better experimental results Cons: - skinny experimental section - lack of recent references
msGKsXQXNiCBk
Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors
[ "Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng" ]
Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledge base. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 82.8%.
[ "new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications" ]
https://openreview.net/pdf?id=msGKsXQXNiCBk
https://openreview.net/forum?id=msGKsXQXNiCBk
7jyp7wrwSzagb
review
1,363,419,120,000
msGKsXQXNiCBk
[ "everyone" ]
[ "Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng" ]
ICLR.cc/2013/conference
2013
review: We thank the reviewers for their comments and agree with most of them. - We've updated our paper on arXiv and added the important experimental comparison to the model in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012). Experimental results show that our model also outperforms this model in terms of ranking & classification. - We didn't report results on the original data because of the overlap between the training and testing sets: 80.23% of the examples in the testing set appear exactly in the training set, and 99.23% of the examples have e1 and e2 'connected' via some relation in the training set. Some relationships, such as 'is similar to', are symmetric. Furthermore, we can reach a top-10 accuracy of 92.8% (instead of the 76.7% reported in the original paper) using their model. - The classification task helps us predict whether a relationship is correct or not, so we report results for both classification and ranking. - To use the pre-trained word vectors, we ignore the senses of the WordNet entities in this paper. - The experiments section is short because we tried to keep the paper's length close to the recommended length. From the ICLR website: 'Papers submitted to this track are ideally 2-3 pages long'.
IpmfpAGoH2KbX
Deep learning and the renormalization group
[ "Cédric Bény" ]
Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling.
[ "algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key" ]
https://openreview.net/pdf?id=IpmfpAGoH2KbX
https://openreview.net/forum?id=IpmfpAGoH2KbX
rGZJRE7IJwrK3
review
1,392,852,360,000
IpmfpAGoH2KbX
[ "everyone" ]
[ "Charles Martin" ]
ICLR.cc/2013/conference
2013
review: The connection between RG and multi-scale modeling has also been pointed out by Candès in E. J. Candès, P. Charlton and H. Helgason, 'Detecting highly oscillatory signals by chirplet path pursuit', Appl. Comput. Harmon. Anal. 24, 14-40, where it is observed that the multi-scale basis suggested in that convex optimization approach is equivalent to the Wilson basis from Wilson's original work on RG theory in the 1970s.
IpmfpAGoH2KbX
Deep learning and the renormalization group
[ "Cédric Bény" ]
Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling.
[ "algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key" ]
https://openreview.net/pdf?id=IpmfpAGoH2KbX
https://openreview.net/forum?id=IpmfpAGoH2KbX
4Uh8Uuvz86SFd
comment
1,363,212,060,000
7to37S6Q3_7Qe
[ "everyone" ]
[ "Cédric Bény" ]
ICLR.cc/2013/conference
2013
reply: I have submitted a replacement to the arXiv on March 13, which should be available the same day at 8pm EST/EDT as version 4. In order to address the first issue, I rewrote section 2 to make it less confusing, specifically by not trying to be overly general. I also rewrote the caption of figure 1 to make it a nearly self-contained explanation of what the model is for a specific one-dimensional example. The content of section 2 essentially explains what features must be kept for any generalization, and section 3 clarifies why these features are important. Concerning the second issue, I agree that this work is preliminary, and implementation is the next step.
IpmfpAGoH2KbX
Deep learning and the renormalization group
[ "Cédric Bény" ]
Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling.
[ "algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key" ]
https://openreview.net/pdf?id=IpmfpAGoH2KbX
https://openreview.net/forum?id=IpmfpAGoH2KbX
7to37S6Q3_7Qe
review
1,362,321,600,000
IpmfpAGoH2KbX
[ "everyone" ]
[ "anonymous reviewer 441c" ]
ICLR.cc/2013/conference
2013
title: review of Deep learning and the renormalization group review: The paper tries to relate the renormalization group and deep learning, specifically hierarchical Bayesian networks. The primary problems are that 1) the paper is only descriptive - it does not explain the models clearly and precisely, and 2) it has no numerical experiments showing that the approach works. What it needs is something like: 1) Define the DMRG (or whatever version of RG you need) and define the machine learning model. Do this with explicit formulas so the reader knows exactly what they are. Things like 'Instead, we only allow for maps πj which are local in two important ways: firstly, each input vertex can only causally influence the values associated with the m output vertices that it represents plus all kth degree neighbors of these, where k would typically be small' are very hard to follow. 2) Show the mapping between the two models. 3) Show what it does on real data and that it does something interesting and/or useful. (Real data: e.g. sound signals, images, text, ...)
IpmfpAGoH2KbX
Deep learning and the renormalization group
[ "Cédric Bény" ]
Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling.
[ "algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key" ]
https://openreview.net/pdf?id=IpmfpAGoH2KbX
https://openreview.net/forum?id=IpmfpAGoH2KbX
tb0cgaJXQfgX6
review
1,363,477,320,000
IpmfpAGoH2KbX
[ "everyone" ]
[ "Aaron Courville" ]
ICLR.cc/2013/conference
2013
review: Reviewer 441c, Have you taken a look at the new version of the paper? Does it go some way to addressing your concerns?
IpmfpAGoH2KbX
Deep learning and the renormalization group
[ "Cédric Bény" ]
Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling.
[ "algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key" ]
https://openreview.net/pdf?id=IpmfpAGoH2KbX
https://openreview.net/forum?id=IpmfpAGoH2KbX
7Kq-KFuY-y7S_
review
1,365,121,080,000
IpmfpAGoH2KbX
[ "everyone" ]
[ "Yann LeCun" ]
ICLR.cc/2013/conference
2013
review: It seems to me like there could be an interesting connection between approximate inference in graphical models and the renormalization methods. There is in fact a long history of interactions between condensed matter physics and graphical models. For example, it is well known that the loopy belief propagation algorithm for inference minimizes the Bethe free energy (an approximation of the free energy in which only pairwise interactions are taken into account and higher-order interactions are ignored). More generally, variational methods inspired by statistical physics have been a very popular topic in graphical model inference. The renormalization methods could be relevant to deep architectures in the sense that the grouping of random variables resulting from a change of scale could be made analogous to the pooling and subsampling operations often used in deep models. It's an interesting idea, but it will probably take more work (and more tutorial expositions of RG) to catch the attention of this community.
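For readers unfamiliar with the Bethe free energy mentioned in this comment, one standard way to write it for a pairwise model (our notation, not part of the comment) uses pairwise beliefs b_ij, singleton beliefs b_i, potentials psi_ij and phi_i, and node degrees d_i:

```latex
F_{\mathrm{Bethe}} \;=\; \sum_{(i,j)} \sum_{x_i, x_j} b_{ij}(x_i, x_j)\,
  \ln \frac{b_{ij}(x_i, x_j)}{\psi_{ij}(x_i, x_j)\,\phi_i(x_i)\,\phi_j(x_j)}
  \;-\; \sum_i (d_i - 1) \sum_{x_i} b_i(x_i)\, \ln \frac{b_i(x_i)}{\phi_i(x_i)}
```

Fixed points of loopy belief propagation correspond to stationary points of this functional under the marginalization constraints on the beliefs, which is the result the comment alludes to.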
IpmfpAGoH2KbX
Deep learning and the renormalization group
[ "Cédric Bény" ]
Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling.
[ "algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key" ]
https://openreview.net/pdf?id=IpmfpAGoH2KbX
https://openreview.net/forum?id=IpmfpAGoH2KbX
Qj1vSox-vpQ-U
review
1,362,219,360,000
IpmfpAGoH2KbX
[ "everyone" ]
[ "anonymous reviewer acf4" ]
ICLR.cc/2013/conference
2013
title: review of Deep learning and the renormalization group review: This paper discusses deep learning from the perspective of renormalization groups in theoretical physics. Both concepts are naturally related; however, this relation has not been formalized adequately thus far and advancing this is a novelty of the paper. The paper contains a non-technical and insightful exposition of concepts and discusses a learning algorithm for stochastic networks based on the `multiscale entanglement renormalization ansatz' (MERA). This contribution will potentially evoke the interest of many readers.
SqNvxV9FQoSk2
Switched linear encoding with rectified linear autoencoders
[ "Leif Johnson", "Craig Corcoran" ]
Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified linear activations on its hidden units. Our analysis builds on recent results to further unify the world of sparse linear coding models. We provide an intuitive interpretation of the behavior of these coding models and demonstrate this intuition using small, artificial datasets with known distributions.
[ "linear", "models", "rectified linear autoencoders", "machine learning", "formal connections", "autoencoders", "neural network models", "inputs", "sparse coding" ]
https://openreview.net/pdf?id=SqNvxV9FQoSk2
https://openreview.net/forum?id=SqNvxV9FQoSk2
ff2dqJ6VEpR8u
review
1,362,252,900,000
SqNvxV9FQoSk2
[ "everyone" ]
[ "anonymous reviewer 5a78" ]
ICLR.cc/2013/conference
2013
title: review of Switched linear encoding with rectified linear autoencoders review: In the deep learning community there has been a recent trend of moving away from the traditional sigmoid/tanh activation function used to inject non-linearity into the model. One activation function that has been shown to work well in a number of cases is the Rectified Linear Unit (ReLU). Building on the prior research, this paper aims to provide an analysis of what is going on while training networks with these activation functions, and why they work. In particular, the authors frame their analysis in the context of training a linear auto-encoder with rectified linear units on whitened data. They use a toy dataset in 3 dimensions (Gaussian and mixture of Gaussians) to conduct the analysis. They loosely test the hypotheses obtained from the toy datasets on the MNIST data. Though the paper starts with a lot of promise, unfortunately it fails to deliver on what was promised. The paper contains no idea or insight that is not either already known or fairly straightforward to see in the case of linear auto-encoders trained with a rectified linear thresholding unit. Furthermore, there are a number of flaws in the paper. For instance, the analysis of section 3.1 seems to be a bit misleading. By definition, if one fixes the weight vector w to [1,0], there is no way that the sigmoid can distinguish between x's which are greater than S for some S. However, with the weight vector taking arbitrary continuous values, that may not be the case. Besides, the purpose of the encoder is to learn a representation which can best represent the input and, coupled with the decoder, reconstruct it. The encoder learning an identity function (as is argued in the paper) is not of much use. Finally, the whole analysis of section 3 is based on a linear auto-encoder whose encoder-decoder weights are tied. However, in the case of MNIST the authors show the filters learnt from an untied-weight auto-encoder. There seems to be some disconnect there. In short, the paper does not offer any novel insight or idea with respect to learning representations using auto-encoders with a rectified linear thresholding function. Various gaps in the analysis also make it not very high-quality work.
SqNvxV9FQoSk2
Switched linear encoding with rectified linear autoencoders
[ "Leif Johnson", "Craig Corcoran" ]
Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified linear activations on its hidden units. Our analysis builds on recent results to further unify the world of sparse linear coding models. We provide an intuitive interpretation of the behavior of these coding models and demonstrate this intuition using small, artificial datasets with known distributions.
[ "linear", "models", "rectified linear autoencoders", "machine learning", "formal connections", "autoencoders", "neural network models", "inputs", "sparse coding" ]
https://openreview.net/pdf?id=SqNvxV9FQoSk2
https://openreview.net/forum?id=SqNvxV9FQoSk2
kH1XHWcuGjDuU
review
1,361,946,600,000
SqNvxV9FQoSk2
[ "everyone" ]
[ "anonymous reviewer 9c3f" ]
ICLR.cc/2013/conference
2013
title: review of Switched linear encoding with rectified linear autoencoders review: This paper analyzes properties of rectified linear autoencoder networks. In particular, the paper shows that rectified linear networks are similar to linear networks (ICA). The major difference is the nonlinearity ('switching') that allows the decoder to select a subset of features. Such selection can be viewed as a mixture of ICA models. The paper visualizes the hyperplanes learned for a 3D dataset and shows that the results are sensible (i.e., the learned hyperplanes capture the components that allow the reconstruction of the data). Some comments: - On the positive side, I think that the paper makes an interesting attempt to understand properties of nonlinear networks, which is typically hard because of the nonlinearities. The choice of the activation function (rectified linear) makes such analysis possible. - I understand that the paper is mainly an analysis paper, but I feel that it misses a strong key thesis. It would be more interesting if the analysis revealed surprising/unexpected results. - The analyses do not seem particularly deep nor surprising, and I do not find that they advance our field in a significant way. I wonder if it's possible to make the analysis more constructive so that we can improve our algorithms, or at least have the analyses reveal certain surprising properties of unsupervised algorithms. - The motivation behind the use of the rectified linear activation function for the analysis is unclear. - The paper touches a little bit on whitening. I find the section on this topic unsatisfying. It would be good to analyse the role of whitening in greater detail here too (as claimed by the abstract and introduction). - The experiments show that it's possible to learn pen strokes and Gabor filters from natural images, but I think this is no longer novel, and there are very few practical implications of this work.
SqNvxV9FQoSk2
Switched linear encoding with rectified linear autoencoders
[ "Leif Johnson", "Craig Corcoran" ]
Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified linear activations on its hidden units. Our analysis builds on recent results to further unify the world of sparse linear coding models. We provide an intuitive interpretation of the behavior of these coding models and demonstrate this intuition using small, artificial datasets with known distributions.
[ "linear", "models", "rectified linear autoencoders", "machine learning", "formal connections", "autoencoders", "neural network models", "inputs", "sparse coding" ]
https://openreview.net/pdf?id=SqNvxV9FQoSk2
https://openreview.net/forum?id=SqNvxV9FQoSk2
oozAQe0eAnQ1w
review
1,362,360,840,000
SqNvxV9FQoSk2
[ "everyone" ]
[ "anonymous reviewer ab3b" ]
ICLR.cc/2013/conference
2013
title: review of Switched linear encoding with rectified linear autoencoders review: The paper draws links between autoencoders with tied weights and rectified linear units (similar to Glorot et al., AISTATS 2011), the triangle k-means and soft-thresholding of Coates et al. (AISTATS 2011 and ICML 2011), and the linear-autoencoder-like ICA learning criterion of Le et al. (NIPS 2011). The first three have in common that, for each example, they yield a subset of non-zero (active) hidden units that results from a simple thresholding, and it is argued that the training objective restricted to that subset corresponds to that of Le et al.'s ICA. Many 2D and 3D graphics with Gaussian data try to convey a geometric intuition of what is going on. I find it rather obvious that these methods switch on a different linear basis for each example. The specific connection highlighted with Le et al.'s ICA work is more interesting, but it only applies if L1 feature sparsity regularization is employed in addition to the rectified linear activation function. At the present stage, my impression is that this paper mainly reflects the authors' maturing perception of the links between the various methods, together with their building of an intuitive geometric understanding of how they work. But it is not yet ripe and its take-home message is not clear. While its reflections are not without basis or potential interest, they are not currently sufficiently formally exposed and read like a set of loosely bundled observations. I think the paper could greatly benefit from a more streamlined central thesis and message with supporting arguments. The main empirical finding from the small experiments in this paper seems to be that the training criterion tends to yield pairs of opposed (negated) feature vectors; what we should conclude from this is, however, unclear. There are too many graphics: several seem redundant and are not particularly enlightening for our understanding. Also, the use of many Gaussian data examples seems a poor choice for highlighting or analysing the switching behavior of these 'switched linear coding' techniques (what does switching buy us if PCA can capture about all there is to the structure?).
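To make the 'switching' picture discussed in these reviews concrete, here is a minimal sketch (our illustration, assuming a tied-weight autoencoder with rectified linear hidden units; not the paper's code). For each input, only the hidden units with positive pre-activation are active, and restricted to that active subset the encode-decode map is purely linear:

```python
import numpy as np

def relu_autoencoder_recon(x, W, b):
    """Reconstruct x with a tied-weight autoencoder using rectified linear hidden units.

    Illustrative shapes: W is (k, d) encoder weights (the decoder is W.T, i.e. tied),
    b is (k,) hidden biases, x is a (d,) input. Only units with positive pre-activation
    are 'switched on'; on that active subset the model acts as a plain linear code,
    which is the per-example linear basis the reviews refer to.
    """
    h = np.maximum(0.0, W @ x + b)   # rectified linear encoding: the 'switch'
    return W.T @ h                   # linear decoding using only the active units

def mean_recon_error(X, W, b):
    """Mean squared reconstruction error over a dataset X of shape (n, d)."""
    return float(np.mean([np.sum((x - relu_autoencoder_recon(x, W, b)) ** 2) for x in X]))
```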
DD2gbWiOgJDmY
Why Size Matters: Feature Coding as Nystrom Sampling
[ "Oriol Vinyals", "Yangqing Jia", "Trevor Darrell" ]
Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well-understood properties of linear classifiers, and their computational efficiency. In this paper we propose a novel view of this pipeline based on kernel methods and Nystrom sampling. In particular, we focus on the coding of a data point with a local representation based on a dictionary with fewer elements than the number of data points, and view it as an approximation to the actual function that would compute pair-wise similarity to all data points (often too many to compute in practice), followed by a Nystrom sampling step to select a subset of all data points. Furthermore, since bounds are known on the approximation power of Nystrom sampling as a function of how many samples (i.e. dictionary size) we consider, we can derive bounds on the approximation of the exact (but expensive to compute) kernel matrix, and use it as a proxy to predict accuracy as a function of the dictionary size, which has been observed to increase but also to saturate as we increase its size. This model may help explain the positive effect of the codebook size and justify the need to stack more layers (often referred to as deep learning), as flat models empirically saturate as we add more complexity.
[ "nystrom", "data points", "size matters", "feature", "approximation", "bounds", "function", "dictionary size", "computer vision", "machine learning community" ]
https://openreview.net/pdf?id=DD2gbWiOgJDmY
https://openreview.net/forum?id=DD2gbWiOgJDmY
EW9REhyYQcESw
review
1,362,202,140,000
DD2gbWiOgJDmY
[ "everyone" ]
[ "anonymous reviewer 1024" ]
ICLR.cc/2013/conference
2013
title: review of Why Size Matters: Feature Coding as Nystrom Sampling review: The authors provide an analysis of the accuracy bounds of feature coding + linear classifier pipelines. They predict an approximate accuracy bound given the dictionary size and correctly estimate the phenomenon observed in the literature where accuracy increases with dictionary size but also saturates. Pros: - Demonstrates limitations of shallow models and analytically justifies the use of deeper models.
DD2gbWiOgJDmY
Why Size Matters: Feature Coding as Nystrom Sampling
[ "Oriol Vinyals", "Yangqing Jia", "Trevor Darrell" ]
Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well-understood properties of linear classifiers, and their computational efficiency. In this paper we propose a novel view of this pipeline based on kernel methods and Nystrom sampling. In particular, we focus on the coding of a data point with a local representation based on a dictionary with fewer elements than the number of data points, and view it as an approximation to the actual function that would compute pair-wise similarity to all data points (often too many to compute in practice), followed by a Nystrom sampling step to select a subset of all data points. Furthermore, since bounds are known on the approximation power of Nystrom sampling as a function of how many samples (i.e. dictionary size) we consider, we can derive bounds on the approximation of the exact (but expensive to compute) kernel matrix, and use it as a proxy to predict accuracy as a function of the dictionary size, which has been observed to increase but also to saturate as we increase its size. This model may help explain the positive effect of the codebook size and justify the need to stack more layers (often referred to as deep learning), as flat models empirically saturate as we add more complexity.
[ "nystrom", "data points", "size matters", "feature", "approximation", "bounds", "function", "dictionary size", "computer vision", "machine learning community" ]
https://openreview.net/pdf?id=DD2gbWiOgJDmY
https://openreview.net/forum?id=DD2gbWiOgJDmY
oxSZoe2BGRoB6
review
1,362,196,320,000
DD2gbWiOgJDmY
[ "everyone" ]
[ "anonymous reviewer 998c" ]
ICLR.cc/2013/conference
2013
title: review of Why Size Matters: Feature Coding as Nystrom Sampling review: This paper presents a theoretical analysis and empirical validation of a novel view of feature extraction systems based on the idea of Nystrom sampling for kernel methods. The main idea is to analyze the kernel matrix for a feature space defined by an off-the-shelf feature extraction system. In such a system, a bound is identified for the error in representing the 'full' dictionary composed of all data points by a Nystrom approximated version (i.e., represented by subsampling the data points randomly). The bound is then extended to show that the approximate kernel matrix obtained using the Nystrom-sampled dictionary is close to the true kernel matrix, and it is argued that the quality of the approximation is a reasonable proxy for the classification error we can expect after training. It is shown that this approximation model qualitatively predicts the monotonic rise in accuracy of feature extraction with larger dictionaries and saturation of performance in experiments. This is a short paper, but the main idea and analysis are interesting. It is nice to have some theoretical machinery to talk about the empirical finding of rising, saturating performance. In some places I think more detail could have been useful. One undiscussed point is the fact that many dictionary-learning methods do more than populate the dictionary with exemplars so it's possible that a 'learning' method might do substantially better (perhaps reaching top performance much sooner). This doesn't appear to be terribly important in low-dimensional spaces where sampling strategies work about as well as learning, but could be critical for high-dimensional spaces (where sampling might asymptote much more slowly than learning). It seems worth explaining the limitations of this analysis and how it relates to learning. A few other questions / comments: The calibration of constants for the bound in the experiments was not clear to me. How is the mapping from the bound (Eq. 2) to classification accuracy actually done? The empirical validation of the lower bound relies on a calibration procedure that, as I understand it, effectively ends up rescaling a fixed-shape curve to fit observed trend in accuracy on the real problem. As a result, it seems like we could come up with a 'nonsense' bound that happened to have such a shape and then make a similar empirical claim. Is there a way to extend the analysis to rule this out? Or perhaps I misunderstand the origin of the shape of this curve. Pros: (1) A novel view of feature extraction that appears to yield a reasonable explanation for the widely observed performance curves of these methods is presented. I don't know how much profit this view might yield, but perhaps that will be made clear by the 'overshooting' method foreshadowed in the conclusion. (2) A pleasingly short read adequate to cover the main idea. (Though a few more details might be nice.) Cons: (1) How this bound relates to the more common case of 'trained' dictionaries is unclear. (2) The empirical validation shows the basic relationship qualitatively, but it is possible that this does not adequately validate the theoretical ideas and their connection to the observed phenomenon.
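As a reading aid for the Nystrom view discussed in this review, here is a minimal sketch of the standard Nystrom kernel approximation (our illustration; the kernel and the random landmark choice are placeholders, not the paper's exact setup):

```python
import numpy as np

def nystrom_kernel_approx(X, kernel, m, seed=0):
    """Rank-m Nystrom approximation of the full kernel matrix K(X, X).

    The m randomly sampled 'landmark' points play the role of the dictionary in the
    coding view discussed above: the full n x n kernel matrix is approximated from
    similarities to the m landmarks only, K_hat = K_nm K_mm^+ K_nm^T.
    `kernel(A, B)` must return the pairwise kernel matrix between rows of A and B.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)     # random subsampling of data points
    K_nm = kernel(X, X[idx])                       # (n, m) similarities to landmarks
    K_mm = kernel(X[idx], X[idx])                  # (m, m) landmark kernel
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T    # surrogate for the exact kernel matrix

# Illustrative RBF kernel; increasing m tightens the approximation, mirroring the
# rising-then-saturating accuracy curves discussed in the paper and review.
rbf = lambda A, B: np.exp(-0.5 * np.square(A[:, None, :] - B[None, :, :]).sum(axis=-1))
```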
DD2gbWiOgJDmY
Why Size Matters: Feature Coding as Nystrom Sampling
[ "Oriol Vinyals", "Yangqing Jia", "Trevor Darrell" ]
Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well-understood properties of linear classifiers, and their computational efficiency. In this paper we propose a novel view of this pipeline based on kernel methods and Nystrom sampling. In particular, we focus on the coding of a data point with a local representation based on a dictionary with fewer elements than the number of data points, and view it as an approximation to the actual function that would compute pair-wise similarity to all data points (often too many to compute in practice), followed by a Nystrom sampling step to select a subset of all data points. Furthermore, since bounds are known on the approximation power of Nystrom sampling as a function of how many samples (i.e. dictionary size) we consider, we can derive bounds on the approximation of the exact (but expensive to compute) kernel matrix, and use it as a proxy to predict accuracy as a function of the dictionary size, which has been observed to increase but also to saturate as we increase its size. This model may help explain the positive effect of the codebook size and justify the need to stack more layers (often referred to as deep learning), as flat models empirically saturate as we add more complexity.
[ "nystrom", "data points", "size matters", "feature", "approximation", "bounds", "function", "dictionary size", "computer vision", "machine learning community" ]
https://openreview.net/pdf?id=DD2gbWiOgJDmY
https://openreview.net/forum?id=DD2gbWiOgJDmY
8sJwMe5ZwE8uz
review
1,363,264,440,000
DD2gbWiOgJDmY
[ "everyone" ]
[ "Oriol Vinyals, Yangqing Jia, Trevor Darrell" ]
ICLR.cc/2013/conference
2013
review: We agree with the reviewer regarding the existence of better dictionary learning methods, and note that many of these are also related to corresponding advanced Nystrom sampling methods, such as [Zhang et al. Improved Nystrom low-rank approximation and error analysis. ICML 08]. These methods could improve performance in absolute terms, but that is an orthogonal issue to our main results. Nonetheless, we think this is a valuable observation, and will include a discussion of these points in the final version of this paper. The relationship between a kernel error bound and classification accuracy is discussed in more detail in [Cortes et al. On the Impact of Kernel Approximation on Learning Accuracy. AISTATS 2010]. The main result is that the bounds are proportional, verifying our empirical claims. We will add this reference to the paper. Regarding the comment on fitting the shape of the curve, we are only using the first two points to fit the 'constants' given in the bound, so the fact that it extrapolates well in many tasks gives us confidence that the bound is accurate.
i87JIQTAnB8AQ
The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization
[ "Hugo Van hamme" ]
Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware.
[ "diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining" ]
https://openreview.net/pdf?id=i87JIQTAnB8AQ
https://openreview.net/forum?id=i87JIQTAnB8AQ
RzSh7m1KhlzKg
review
1,363,574,460,000
i87JIQTAnB8AQ
[ "everyone" ]
[ "Hugo Van hamme" ]
ICLR.cc/2013/conference
2013
review: I would like to thank the reviewers for their investment of time and effort to formulate their valued comments. The paper was updated according to your comments. Below I address your concerns: A common remark is the lack of comparison with state-of-the-art NMF solvers for the Kullback-Leibler divergence (KLD). I compared the performance of the diagonalized Newton algorithm (DNA) with the widespread multiplicative updates (MU) exactly because MU is the most common baseline and almost every algorithm has been compared against it. As you suggested, I did run comparison tests and I will present the results here. I need to find a method to post some figures to make the point clear. First, I compared against the Cyclic Coordinate Descent (CCD) of Hsieh & Dhillon using the software they provide on their website. I ran the synthetic 1000x500 example (rank 10). The KLD as a function of iteration number for DNA and CCD is very close (I did not find a way to post a plot on this forum). However, in terms of CPU time (run on the machine I mention in the paper) DNA is a lot faster, with about 200ms per iteration for CCD and about 50ms for DNA. Note that CCD is completely implemented in C++ (embedded in a mex-file) while DNA is implemented in Matlab (with one routine in mex - see the download page mentioned in the paper). As for the comparison with SBCD (scalar block coordinate descent), I also ran their code on the same example, but unfortunately one of the matrix factors is projected to an all-zero matrix in the first iteration. I have not found the cause yet. What definitely needs investigation is that I observe CCD to be 4 times slower than DNA. Using my implementation of MU, 1200 MU iterations are actually as fast as the 100 CCD iterations. (My Matlab MU implementation is 10 times faster than the one provided by Hsieh & Dhillon.) For these reasons, I am not too keen on quickly including a comparison in terms of CPU time (which is really the bottom line), as implementation issues seem not so trivial. Even more so for a comparison on a GPU, where the picture could be different from the CPU for the cyclic updates in CCD. A thorough comparison on these two architectures seems like a substantial amount of future work. But I hope the data above convince you that the present paper and public code are significant work. Reply to Anonymous 57f3 'it's not clear that matrix factorization is a problem for which optimization speed is a primary concern (all of the experiments in the paper terminate after only a few minutes)' >> There are practical problems where NMF takes hours, e.g. the problems of [6], which is essentially learning a speech recognizer model from data. We are now applying NMF-based speech recognition in learning paradigms that learn from user interaction examples. In such cases, you want to wait seconds, not minutes. Also, there is an increased interest in 'large-scale NMF problems'. 'Using a KL-divergence objective seems strange to me since there aren't any distributions involved, just matrices, whose entries, while positive, need not sum to 1 along any row or column. Are the entries of the matrices supposed to represent probabilities?' >> Notice that the second and third term in the expression for the KLD (Eq. 1) are normalization terms such that we don't require V or Z to sum to unity. This is very common in the NMF literature, and was motivated in, among others, [1]. The KLD is appropriate if the data follow a (mixture of) Poisson distribution.
While this is realistic for counts data (like in the Newsgroup corpus), the KLD is also applied on Fourier spectra, e.g. for speaker separation or speech enhancement, with success. Imho, the relevance of the KLD does not need to be motivated in a paper on algorithms; see also [18] and [20] (numbering in the new paper). 'I understand that this is a formulation used in previous work ([1]), but it should be briefly explained.' >> Added a sentence about the Poisson hypothesis after Eq. 1. 'You should explain the connection between your work and [17] more carefully. Exactly how is it similar/different?' >> Reformulated. [17] (now [18]) uses a totally different motivation, but also involves the second order derivatives, like a Newton method. 'Has a diagonal Newton-type approach ever been used for the squared error objective?' >> A reference is given now. Note however that the KLD behaves substantially differently. 'the smallest cost' -> 'leading to the greatest reduction in d_{KL}(V,Z)' 'the variables required to compute' -> 'the quantities required to compute' >> corrected You should avoid using two meanings of the word 'regularized' as this can lead to confusion. Maybe 'damped' would work better to refer to the modifications made to the Newton updates that prevent divergence? >> Yes. A lot better. Corrected. 'Have you compared to using damped/'regularized' Newton updates instead of your method of selecting the best between the Newton and MU updates? In my experience, damping, along the lines of the LM algorithm or something similar, can help a great deal.' >> Yes. I initially tried to control the damping by adding lambda*I to the Hessian, where lambda is decreased on success and increased if the KLD increases. I found it difficult to find a setting that worked well on a variety of problems. I would recommend using '\top' to denote matrix transposition instead of what you are doing. Section 2 needs to be reorganized. It's hard for me to follow what you are trying to say here. First, you introduce some regularization terms. Then, you derive a particular fixed-point update scheme. When you say 'Minimizing [...] is achieved by alternative updates...' surely you mean that this is just one particular way it might be done. >> That's indeed what I meant to say. 'is' => 'can be' You say you are applying the KKT conditions, but your derivation is strange and you seem to skip a bunch of steps and neglect to use explicit KKT multipliers (although the result seems correct based on my independent derivation). But when you say: 'If h_r = 0, the partial derivative is positive. Hence the product of h_r and the partial derivative is always zero', I don't see how this is a correct logical implication. Rather, the product is zero for any solution satisfying complementary slackness. >> I meant this holds for any solution of (5). This is corrected. And I don't understand why it is particularly important that the sum over equation (6) is zero (which is how the normalization in eqn 10 is justified). Surely this is only a (weak) necessary condition, but not a sufficient one, for a valid optimal solution. Or is there some reason why this is sufficient (if so, please state it in the paper!). >> A Newton update may yield a guess that does not satisfy this (weak) necessary condition. We can satisfy this condition easily with the renormalization (10), which is reflected in steps 16 and 29. I don't understand how the sentence on line 122 'Therefor...' is not a valid logical implication.
Did you actually mean to use the word 'therefor' here? The lower bound is, however, correct. 'floor resp. ceiling'?? >> 'Therefore' => 'To respect the nonnegativity and to avoid the singularity'. Reply to Anonymous 4322: See the comparison described above. I added more about the differences with the prior work you mention. Reply to Anonymous 482c: See also the comparison data detailed above. You are right that there is a lot of generic work on Hessian preconditioning. I refer to papers that work on damping and line search in the context of NMF ([10], [11], [12], [14] ...). Diagonalization is only related in the sense that it ensures that the Hessian is positive definite (not in general, but here it does).
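For reference, the generalized Kullback-Leibler divergence that the discussion of Eq. 1 refers to is commonly written as follows, with Z = WH the low-rank reconstruction; the second and third terms are the normalization terms mentioned above (our transcription of the standard form, which may differ cosmetically from the paper's Eq. 1):

```latex
d_{\mathrm{KL}}(V, Z) \;=\; \sum_{i,j} \left( V_{ij} \ln \frac{V_{ij}}{Z_{ij}} - V_{ij} + Z_{ij} \right),
\qquad Z = WH
```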
i87JIQTAnB8AQ
The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization
[ "Hugo Van hamme" ]
Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware.
[ "diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining" ]
https://openreview.net/pdf?id=i87JIQTAnB8AQ
https://openreview.net/forum?id=i87JIQTAnB8AQ
FFkZF49pZx-pS
review
1,362,210,360,000
i87JIQTAnB8AQ
[ "everyone" ]
[ "anonymous reviewer 4322" ]
ICLR.cc/2013/conference
2013
title: review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization review: Summary: The paper presents a new algorithm for solving L1-regularized NMF problems in which the fitting term is the Kullback-Leibler divergence. The strategy combines the classic multiplicative updates with a diagonal approximation of Newton's method for solving the KKT conditions of the NMF optimization problem. This approximation results in a multiplicative update that is computationally light. Since the objective function might increase under the Newton updates, the author proposes to simultaneously compute both the multiplicative and Newton updates and choose the one that produces the largest descent. The algorithm is tested on several datasets, generally producing improvements in both the number of iterations and computational time with respect to the standard multiplicative updates. I believe that the paper is well written. It proposes an efficient optimization algorithm for solving a problem that is not novel but very important in many applications. The author should highlight the strengths of the proposed approach and the differences with recent works presented in the literature. Pros: - the paper addresses an important problem in matrix factorization, extensively used in audio processing applications - the experimental results show that the method is more efficient than the multiplicative algorithm (which is the most widely used optimization tool), without significantly increasing the algorithmic complexity Cons: - experimental comparisons against related approaches are missing - this approach seems limited to working only with the Kullback-Leibler divergence as the fitting cost. General comments: I believe that the paper lacks experimental comparisons with other accelerated optimization schemes for solving the same problem. In particular, I believe that the author should include comparisons with [17] and with C.-J. Hsieh and I. S. Dhillon, 'Fast coordinate descent methods with variable selection for non-negative matrix factorization', in Proceedings of the 17th ACM SIGKDD, pages 1064-1072, 2011, which should also be cited. As the author points out, the approach in [17] is very similar to the one proposed in this paper (they have code available online). The work by Hsieh and Dhillon is also very related to this paper. They propose a coordinate descent method using Newton's method to solve the individual one-variable sub-problems. More details on the differences with these two works should be provided in Section 1. The experimental setting itself seems convincing. Figures 2 and 3 are never cited in the paper.
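The update strategy summarized in this review - compute both the multiplicative and the diagonal Newton candidate and keep the better one - can be sketched as follows. This is our schematic with placeholder names; the diagonal Newton step itself is left as a stub rather than reproducing the paper's exact formulas:

```python
import numpy as np

def kld(V, Z, eps=1e-12):
    """Generalized Kullback-Leibler divergence between data V and reconstruction Z."""
    return float(np.sum(V * np.log((V + eps) / (Z + eps)) - V + Z))

def mu_update_H(V, W, H, eps=1e-12):
    """Classic Lee-Seung multiplicative update of H for the KL objective."""
    Z = W @ H
    return H * (W.T @ (V / (Z + eps))) / (W.sum(axis=0)[:, None] + eps)

def safeguarded_update_H(V, W, H, newton_candidate):
    """Compute both the MU candidate and a Newton-type candidate for H and keep
    whichever yields the lower KL divergence, as the review describes.
    `newton_candidate(V, W, H)` is a stub standing in for the paper's diagonal
    Newton step; it must return a nonnegative matrix of the same shape as H.
    """
    candidates = [mu_update_H(V, W, H), newton_candidate(V, W, H)]
    return min(candidates, key=lambda Hc: kld(V, W @ Hc))
```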
i87JIQTAnB8AQ
The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization
[ "Hugo Van hamme" ]
Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware.
[ "diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining" ]
https://openreview.net/pdf?id=i87JIQTAnB8AQ
https://openreview.net/forum?id=i87JIQTAnB8AQ
MqwZf2jPZCJ-n
review
1,363,744,920,000
i87JIQTAnB8AQ
[ "everyone" ]
[ "Hugo Van hamme" ]
ICLR.cc/2013/conference
2013
review: First: sorry for the multiple postings. Browser acting weird. Can't remove them ... Update: I was able to get the sbcd code to work. Two mods required (refer to Algorithm 1 in the Li, Lebanon & Park paper - ref [18] in v2 paper on arxiv): 1) you have to be careful with initialization. If the estimates for W or H are too large, E = A - WH could potentially contain too many zeros in line 3 and the update maps H to all zeros. Solution: I first perform a multiplicative update on W and H so you have reasonably scaled estimates. 2) line 16 is wrongly implemented in the publicly available ffhals5.m. I reran the comparison (different machine though - the one I used before was fully loaded): 1) CCD (ref [17]) - the c++ code compiled to a matlab mex file as downloaded from the author's website and following their instructions. 2) DNA - fully implemented in matlab as available from http://www.esat.kuleuven.be/psi/spraak/downloads/ 3) SBCD (ref [18]) - code fully in matlab with mods above 4) MU (multiplicative updates) - implementation fully in matlab as available from http://www.esat.kuleuven.be/psi/spraak/downloads/ The KLD as a function of the iteration for the rank-10 random 1000x500 matrix is shown in https://dl.dropbox.com/u/915791/iteration.pdf. We observe that SBCD takes a good start but then slows down. DNA is best after the 5th iteration. The KLD as a function of CPU time is shown in https://dl.dropbox.com/u/915791/time.pdf DNA is the clear winner, followed by MU which beats both SBCD and CCD. This may be surprising, but as I mentioned earlier, there are some implementation issues. CCD is a single-thread implementation, while matlab is multi-threaded and works in parallel. However, the cyclic updates in CCD are not very suitable for parallelization. The SBCD needs reimplementation, honestly. In summary, DNA does compare favourably to the state-of-the-art, but I don't really feel comfortable about including such a comparison in a scientific paper if there is such a dominant effect of programming style/skills on the result.
i87JIQTAnB8AQ
The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization
[ "Hugo Van hamme" ]
Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware.
[ "diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining" ]
https://openreview.net/pdf?id=i87JIQTAnB8AQ
https://openreview.net/forum?id=i87JIQTAnB8AQ
oo1KoBhzu3CGs
review
1,362,192,540,000
i87JIQTAnB8AQ
[ "everyone" ]
[ "anonymous reviewer 57f3" ]
ICLR.cc/2013/conference
2013
title: review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization review: This paper develops a new iterative optimization algorithm for performing non-negative matrix factorization, assuming a standard 'KL-divergence' objective function. The method proposed combines the use of a traditional updating scheme ('multiplicative updates' from [1]) in the initial phase of optimization, with a diagonal Newton approach which is automatically switched to when it will help. This switching is accomplished by always computing both updates and taking whichever is best, which will typically be MU at the start and the more rapidly converging (but less stable) Newton method towards the end. Additionally, the diagonal Newton updates are made more stable using a few tricks, some of which are standard and some of which may not be. It is found that this can provide speed-ups which may be mild or significant, depending on the application, versus a standard approach which only uses multiplicative updates. As pointed out by the authors, Newton-type methods have been explored for non-negative matrix factorization before, but not for this particular objective with a diagonal approximation (except perhaps [17]?). The writing is rough in a few places but okay overall. The experimental results seem satisfactory compared to the classical algorithm from [1], although comparisons to other potentially more recent approaches are conspicuously absent. I'm not an expert on matrix factorization or these particular datasets so it's hard for me to independently judge if these results are competitive with state of the art methods. The paper doesn't seem particularly novel to me, but matrix factorization isn't a topic I find particularly interesting, so this probably biases me against the paper somewhat. Pros: - reasonably well presented - empirical results seem okay Cons: - comparisons to more recent approaches are lacking - it's not clear that matrix factorization is a problem for which optimization speed is a primary concern (all of the experiments in the paper terminate after only a few minutes) - writing is rough in a few places Detailed comments: Using a KL-divergence objective seems strange to me since there aren't any distributions involved, just matrices, whose entries, while positive, need not sum to 1 along any row or column. Are the entries of the matrices supposed to represent probabilities? I understand that this is a formulation used in previous work ([1]), but it should be briefly explained. You should explain the connection between your work and [17] more carefully. Exactly how is it similar/different? Has a diagonal Newton-type approach ever been used for the squared error objective? 'the smallest cost' -> 'leading to the greatest reduction in d_{KL}(V,Z)' 'the variables required to compute' -> 'the quantities required to compute' You should avoid using two meanings of the word 'regularized' as this can lead to confusion. Maybe 'damped' would work better to refer to the modifications made to the Newton updates that prevent divergence? Have you compared to using damped/'regularized' Newton updates instead of your method of selecting the best between the Newton and MU updates? In my experience, damping, along the lines of the LM algorithm or something similar, can help a great deal. I would recommend using '\top' to denote matrix transposition instead of what you are doing. Section 2 needs to be reorganized. It's hard for me to follow what you are trying to say here.
First, you introduce some regularization terms. Then, you derive a particular fixed-point update scheme. When you say 'Minimizing [...] is achieved by alternative updates...' surely you mean that this is just one particular way it might be done. Also, are these derivations prior work (e.g. from [1])? If so, it should be stated. It's hard to follow the derivations in this section. You say you are applying the KKT conditions, but your derivation is strange and you seem to skip a bunch of steps and neglect to use explicit KKT multipliers (although the result seems correct based on my independent derivation). But when you say: 'If h_r = 0, the partial derivative is positive. Hence the product of h_r and the partial derivative is always zero', I don't see how this is a correct logical implication. Rather, the product is zero for any solution satisfying complementary slackness. And I don't understand why it is particularly important that the sum over equation (6) is zero (which is how the normalization in eqn 10 is justified). Surely this is only a (weak) necessary condition, but not a sufficient one, for a valid optimal solution. Or is there some reason why this is sufficient (if so, please state it in the paper!). I don't understand how the sentence on line 122 'Therefor...' is not a valid logical implication. Did you actually mean to use the word 'therefor' here? The lower bound is, however, correct. 'floor resp. ceiling'??
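Since the review above leans heavily on the KKT conditions of the non-negatively constrained KL subproblem, it may help to state them explicitly. The block below is a sketch written from the generic L1-regularized formulation; the paper's own notation and equation numbers may differ.

```latex
% KKT conditions for  min_{h \ge 0}  F(h) = d_{KL}(v, Wh) + \lambda \|h\|_1
\frac{\partial F}{\partial h_r}
  = \sum_i w_{ir}\left(1 - \frac{v_i}{(Wh)_i}\right) + \lambda \;\ge\; 0,
\qquad
h_r \ge 0,
\qquad
h_r\,\frac{\partial F}{\partial h_r} = 0
\quad\text{(complementary slackness).}
```

In particular, the product of h_r and the partial derivative vanishes only at points satisfying complementary slackness, which is the distinction the reviewer draws above.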
i87JIQTAnB8AQ
The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization
[ "Hugo Van hamme" ]
Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware.
[ "diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining" ]
https://openreview.net/pdf?id=i87JIQTAnB8AQ
https://openreview.net/forum?id=i87JIQTAnB8AQ
aplzZcXNokptc
review
1,363,615,980,000
i87JIQTAnB8AQ
[ "everyone" ]
[ "Hugo Van hamme" ]
ICLR.cc/2013/conference
2013
review: About the comparison with Cyclic Coordinate Descent (as described in C.-J. Hsieh and I. S. Dhillon, “Fast Coordinate Descent Methods with Variable Selection for Non-negative Matrix Factorization,” in Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), San Diego, CA, USA, August 2011) using their software: the plots of the KLD as a function of iteration number and cpu time are located at https://dl.dropbox.com/u/915791/iteration.pdf and https://dl.dropbox.com/u/915791/time.pdf The data is the synthetic 1000x500 random matrix of rank 10. They show DNA has comparable convergence behaviour and the implementation is faster, despite it being matlab (DNA) vs. c++ (CCD).
i87JIQTAnB8AQ
The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization
[ "Hugo Van hamme" ]
Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware.
[ "diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining" ]
https://openreview.net/pdf?id=i87JIQTAnB8AQ
https://openreview.net/forum?id=i87JIQTAnB8AQ
EW5mE9upmnWp1
review
1,362,382,860,000
i87JIQTAnB8AQ
[ "everyone" ]
[ "anonymous reviewer 482c" ]
ICLR.cc/2013/conference
2013
title: review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization review: Overview: This paper proposes an element-wise (diagonal Hessian) Newton method to speed up convergence of the multiplicative update algorithm (MU) for NMF problems. Monotonic progress is guaranteed by an element-wise fall-back mechanism to MU. At a minimal computational overhead, this is shown to be effective in a number of experiments. The paper is well-written, the experimental validation is convincing, and the author provides detailed pseudocode and a matlab implementation. Comments: There is a large body of related work outside of the NMF field that considers diagonal Hessian preconditioning of updates, going back (at least) as early as Becker & LeCun in 1988. Switching between EM and Newton update (using whichever is best, element-wise) is an interesting alternative to more classical forms of line search: it may be worth doing a more detailed comparison to such established techniques. I would appreciate a discussion of the potential of extending the idea to non KL-divergence costs.
qEV_E7oCrKqWT
Zero-Shot Learning Through Cross-Modal Transfer
[ "Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng" ]
This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
[ "model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories" ]
https://openreview.net/pdf?id=qEV_E7oCrKqWT
https://openreview.net/forum?id=qEV_E7oCrKqWT
UgMKgxnHDugHr
review
1,362,080,640,000
qEV_E7oCrKqWT
[ "everyone" ]
[ "anonymous reviewer cfb0" ]
ICLR.cc/2013/conference
2013
title: review of Zero-Shot Learning Through Cross-Modal Transfer review: *A brief summary of the paper's contributions, in the context of prior work* This paper introduces a zero-shot learning approach to image classification. The model first tries to detect whether an image contains an object from a so-far unseen category. If not, the model relies on a regular, state-of-the-art supervised classifier to assign the image to known classes. Otherwise, it attempts to identify what this object is, based on a comparison between the image and each unseen class, in a learned joint image/class representation space. The method relies on pre-trained word representations, extracted from unlabelled text, to represent the classes. Experiments evaluate the compromise between classification accuracy on the seen classes and the unseen classes, as a threshold for identifying an unseen class is varied. *An assessment of novelty and quality* This paper goes beyond the current work on zero-shot learning in 2 ways. First, it shows that very good classification of certain pairs of unseen classes can be achieved based on learned (as opposed to hand-designed) representations for these classes. I find this pretty impressive. The second contribution is in a method for dealing with seen and unseen classes, based on the idea that unseen classes are outliers. I've seen little work attacking this issue directly. Unfortunately, I'm not super impressed with the results: having to drop from 80% to 70% to obtain between 15% and 30% accuracy on unseen classes (and only for certain pairs) is a bit disappointing. But it's a decent first step. Plus, the proposed model is overall fairly simple, and zero-shot learning is quite challenging, so in fact it's perhaps surprising that a simple approach doesn't do worse. Finally, I find the paper reads well and is quite clear in its methodology. I do wonder why the authors claim that they 'further extend [the] theoretical analysis [of Palatucci et al.] ... and weaken their strong assumptions'. This sentence suggests there is a theoretical contribution to this work, which I don't see. So I would remove that sentence. Also, the second paragraph of section 6 is incomplete. *A list of pros and cons (reasons to accept/reject)* The pros are: - attacks an important, very hard problem - goes significantly beyond the current literature on zero-shot learning - some of the results are pretty impressive The cons are: - model is a bit simple and builds quite a bit on previous work on image classification [6] and unsupervised learning of word representation [15] (but frankly, that's really not such a big deal)
qEV_E7oCrKqWT
Zero-Shot Learning Through Cross-Modal Transfer
[ "Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng" ]
This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
[ "model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories" ]
https://openreview.net/pdf?id=qEV_E7oCrKqWT
https://openreview.net/forum?id=qEV_E7oCrKqWT
88s34zXWw20My
review
1,362,001,800,000
qEV_E7oCrKqWT
[ "everyone" ]
[ "anonymous reviewer 310e" ]
ICLR.cc/2013/conference
2013
title: review of Zero-Shot Learning Through Cross-Modal Transfer review: summary: the paper presents a framework to learn to classify images that can come either from known or unknown classes. This is done by first mapping both images and classes into a joint embedding space. Furthermore, the probability of an image being of an unknown class is estimated using a mixture of Gaussians. Experiments on CIFAR-10 show how performance varies depending on the threshold used to determine if an image is of a known class or not. review: - The idea of learning a joint embedding of images and classes is not new, but is nicely explained in the paper. - the authors relate to other works on zero-shot learning. I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class. - The proposed approach to estimate the probability that an image is of a known class or not is based on a mixture of Gaussians, where one Gaussian is estimated for each known class, with the mean given by the embedding vector of the class and the standard deviation estimated on the training samples of that class. I have a few concerns with this: * I wonder if the standard deviation will not be biased (small) since it is estimated on the training samples. How important is that? * I wonder if the threshold does not depend on things like the complexity of the class and the number of training examples of the class. In general, I am not convinced that a single threshold can be used to estimate if a new image is of a new class. I agree it might work for a small number of well-separated classes (like CIFAR-10), but I doubt it would work for problems with thousands of classes which obviously are more interconnected with each other. - I did not understand what to do when one decides that an image is of an unknown class. How should it be labeled in that case? - I did not understand why one needs to learn a separate classifier for the known classes, instead of just using the distance to the known classes in the embedding space.
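To make the outlier-detection scheme being debated here more concrete, below is a minimal NumPy sketch of the decision rule described in this exchange: one isotropic Gaussian per seen class, centered on that class's word vector with a standard deviation estimated from training images, a log-density threshold that flags likely unseen-class images, a discriminative classifier for the seen classes, and nearest-word-vector assignment for the unseen ones. It is a reconstruction under stated assumptions, not the authors' code; the threshold handling (single vs. per-class) and all names are placeholders.

```python
import numpy as np

def log_isotropic_gauss(x, mu, sigma):
    """Log-density of x under N(mu, sigma^2 I)."""
    d = x.size
    return (-0.5 * np.sum((x - mu) ** 2) / sigma ** 2
            - d * np.log(sigma) - 0.5 * d * np.log(2.0 * np.pi))

def zero_shot_predict(x, seen_vecs, seen_sigma, unseen_vecs, threshold,
                      seen_classifier):
    """x: an image already mapped into the word-vector space.
    seen_vecs / unseen_vecs: dicts class -> word vector.
    seen_sigma: dict class -> isotropic std estimated on training images.
    seen_classifier: any discriminative model over the seen classes."""
    best_seen_ll = max(log_isotropic_gauss(x, mu, seen_sigma[c])
                       for c, mu in seen_vecs.items())
    if best_seen_ll >= threshold:
        # looks like a known class: defer to the stronger discriminative model
        return seen_classifier(x)
    # otherwise assign the nearest unseen-class word vector (zero-shot branch)
    return min(unseen_vecs, key=lambda c: np.linalg.norm(x - unseen_vecs[c]))
```

A per-class threshold, as suggested in the authors' reply below, would simply replace the single `threshold` with a dict indexed by the seen class that achieves `best_seen_ll`.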
qEV_E7oCrKqWT
Zero-Shot Learning Through Cross-Modal Transfer
[ "Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng" ]
This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
[ "model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories" ]
https://openreview.net/pdf?id=qEV_E7oCrKqWT
https://openreview.net/forum?id=qEV_E7oCrKqWT
ddIxYp60xFd0m
review
1,363,754,820,000
qEV_E7oCrKqWT
[ "everyone" ]
[ "Richard Socher" ]
ICLR.cc/2013/conference
2013
review: We thank the reviewers for their feedback. I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class. - Thanks for the reference. Would you use the images of other classes to train classification similarity learning? These would have a different distribution than the completely unseen images from the zero shot classes? In other words, what would the non-similar objects be? * I wonder if the standard deviation will not be biased (small) since it is estimated on the training samples. How important is that? - We tried fitting a general covariance matrix and it decreases performance. * I wonder if the threshold does not depend on things like the complexity of the class and the number of training examples of the class. - It might be, and we note that different thresholds should be selected via cross validation. In general, I am not convinced that a single threshold can be used to estimate if a new image is of a new class. - Right, we found better performance by fitting different thresholds for each class. We will include this in follow-up paper submissions. I did not understand what to do when one decides that an image is of an unknown class. How should it be labeled in that case? - Using the distances to the word vectors of the unknown classes. I did not understand why one needs to learn a separate classifier for the known classes, instead of just using the distance to the known classes in the embedding space. - The discriminative classifiers have much higher accuracy than the simple distances for known classes. I do wonder why the authors claim that they 'further extend [the] theoretical analysis [of Palatucci et al.] ... and weaken their strong assumptions'. - Thanks, we will take this and the other typo out and have uploaded a new version to arxiv (which should be available soon).
qEV_E7oCrKqWT
Zero-Shot Learning Through Cross-Modal Transfer
[ "Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng" ]
This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
[ "model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories" ]
https://openreview.net/pdf?id=qEV_E7oCrKqWT
https://openreview.net/forum?id=qEV_E7oCrKqWT
SSiPd5Rr9bdXm
review
1,363,754,760,000
qEV_E7oCrKqWT
[ "everyone" ]
[ "Richard Socher" ]
ICLR.cc/2013/conference
2013
review: We thank the reviewers for their feedback. I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class. - Thanks for the reference. Would you use the images of other classes to train classification similarity learning? These would have a different distribution than the completely unseen images from the zero shot classes? In other words, what would the non-similar objects be? * I wonder if the standard deviation will not be biased (small) since it is estimated on the training samples. How important is that? - We tried fitting a general covariance matrix and it decreases performance. * I wonder if the threshold does not depend on things like the complexity of the class and the number of training examples of the class. - It might be, and we note that different thresholds should be selected via cross validation. In general, I am not convinced that a single threshold can be used to estimate if a new image is of a new class. - Right, we found better performance by fitting different thresholds for each class. We will include this in follow-up paper submissions. I did not understand what to do when one decides that an image is of an unknown class. How should it be labeled in that case? - Using the distances to the word vectors of the unknown classes. I did not understand why one needs to learn a separate classifier for the known classes, instead of just using the distance to the known classes in the embedding space. - The discriminative classifiers have much higher accuracy than the simple distances for known classes. I do wonder why the authors claim that they 'further extend [the] theoretical analysis [of Palatucci et al.] ... and weaken their strong assumptions'. - Thanks, we will take this and the other typo out and have uploaded a new version to arxiv (which should be available soon).
ZhGJ9KQlXi9jk
Complexity of Representation and Inference in Compositional Models with Part Sharing
[ "Alan Yuille", "Roozbeh Mottaghi" ]
This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers.
[ "inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level" ]
https://openreview.net/pdf?id=ZhGJ9KQlXi9jk
https://openreview.net/forum?id=ZhGJ9KQlXi9jk
eG1mGYviVwE-r
comment
1,363,730,760,000
Av10rQ9sBlhsf
[ "everyone" ]
[ "Alan L. Yuille, Roozbeh Mottaghi" ]
ICLR.cc/2013/conference
2013
reply: Okay, thanks. We understand your viewpoint.
ZhGJ9KQlXi9jk
Complexity of Representation and Inference in Compositional Models with Part Sharing
[ "Alan Yuille", "Roozbeh Mottaghi" ]
This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers.
[ "inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level" ]
https://openreview.net/pdf?id=ZhGJ9KQlXi9jk
https://openreview.net/forum?id=ZhGJ9KQlXi9jk
EHF-pZ3qwbnAT
review
1,362,609,900,000
ZhGJ9KQlXi9jk
[ "everyone" ]
[ "anonymous reviewer a9e8" ]
ICLR.cc/2013/conference
2013
title: review of Complexity of Representation and Inference in Compositional Models with Part Sharing review: This paper explores how inference can be done in a part-sharing model and the computational cost of doing so. It relies on 'executive summaries' where each layer only holds approximate information about the layer below. The authors also study the computational complexity of this inference in various settings. I must say I very much like this paper. It proposes a model which combines fast and approximate inference (approximate in the sense that the global description of the scene lacks details) with a slower and exact inference (in the sense that it allows exact inference of the parts of the model). Since I am not familiar with the literature, I cannot however judge the novelty of the work. Pros: - model which attractively combines inference at the top level with inference at the lower levels - the analysis of the computational complexity for varying number of parts and objects is interesting - the work is very conjectural but I'd rather see it acknowledged than hidden under toy experiments. Cons:
ZhGJ9KQlXi9jk
Complexity of Representation and Inference in Compositional Models with Part Sharing
[ "Alan Yuille", "Roozbeh Mottaghi" ]
This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers.
[ "inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level" ]
https://openreview.net/pdf?id=ZhGJ9KQlXi9jk
https://openreview.net/forum?id=ZhGJ9KQlXi9jk
sPw_squDz1sCV
review
1,363,536,060,000
ZhGJ9KQlXi9jk
[ "everyone" ]
[ "Aaron Courville" ]
ICLR.cc/2013/conference
2013
review: Reviewer c1e8, Please read the authors' responses to your review. Do they change your evaluation of the paper?
ZhGJ9KQlXi9jk
Complexity of Representation and Inference in Compositional Models with Part Sharing
[ "Alan Yuille", "Roozbeh Mottaghi" ]
This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers.
[ "inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level" ]
https://openreview.net/pdf?id=ZhGJ9KQlXi9jk
https://openreview.net/forum?id=ZhGJ9KQlXi9jk
Rny5iXEwhGnYN
comment
1,362,095,760,000
p7BE8U1NHl8Tr
[ "everyone" ]
[ "Alan L. Yuille, Roozbeh Mottaghi" ]
ICLR.cc/2013/conference
2013
reply: The unsupervised learning will also appear at ICLR. So we didn't describe it in this paper and concentrated instead on the advantages of compositional models for search after the learning has been done. The reviewer says that this result is not very novel and mentions analogies to complexity gain of large convolutional networks. This is an interesting direction to explore, but we are unaware of any mathematical analysis of convolutional networks that addresses these issues (please refer us to any papers that we may have missed). Since our analysis draws heavily on properties of compositional models -- explicit parts, executive summary, etc -- we are not sure how our analysis can be applied directly to convolutional networks. Certain aspects of our analysis also are novel to us -- e.g., the sharing of parts, the parallelization. In summary, although it is plausible that compositional models and convolutional nets have good scaling properties, we are unaware of any other mathematical results demonstrating this.
ZhGJ9KQlXi9jk
Complexity of Representation and Inference in Compositional Models with Part Sharing
[ "Alan Yuille", "Roozbeh Mottaghi" ]
This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers.
[ "inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level" ]
https://openreview.net/pdf?id=ZhGJ9KQlXi9jk
https://openreview.net/forum?id=ZhGJ9KQlXi9jk
O3uWBm_J8IOlG
comment
1,363,731,300,000
EHF-pZ3qwbnAT
[ "everyone" ]
[ "Alan L. Yuille, Roozbeh Mottaghi" ]
ICLR.cc/2013/conference
2013
reply: Thanks for your comments. The paper is indeed conjectural which is why we are submitting it to this new type of conference. But we have some proof of content from some of our earlier work -- and we are working on developing real world models using these types of ideas.
ZhGJ9KQlXi9jk
Complexity of Representation and Inference in Compositional Models with Part Sharing
[ "Alan Yuille", "Roozbeh Mottaghi" ]
This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers.
[ "inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level" ]
https://openreview.net/pdf?id=ZhGJ9KQlXi9jk
https://openreview.net/forum?id=ZhGJ9KQlXi9jk
Av10rQ9sBlhsf
comment
1,363,643,940,000
Rny5iXEwhGnYN
[ "everyone" ]
[ "anonymous reviewer c1e8" ]
ICLR.cc/2013/conference
2013
reply: Sorry: I should have written 'although I do not see it as very surprising' instead of 'novel'. The analogy with convolutional networks is that quantities computed by low-level nodes can be shared by several high level nodes. This is trivial in the case of conv. nets, and not trivial in your case because you have to organize the search algorithm in a manner that leverages this sharing. But I still like your paper because it gives 'a self-contained description of a sophisticated and conceptually sound object recognition system'. Although my personal vantage point makes the complexity result less surprising, the overall achievement is non trivial and absolutely worth publishing.
ZhGJ9KQlXi9jk
Complexity of Representation and Inference in Compositional Models with Part Sharing
[ "Alan Yuille", "Roozbeh Mottaghi" ]
This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers.
[ "inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level" ]
https://openreview.net/pdf?id=ZhGJ9KQlXi9jk
https://openreview.net/forum?id=ZhGJ9KQlXi9jk
oCzZPts6ZYo6d
review
1,362,211,680,000
ZhGJ9KQlXi9jk
[ "everyone" ]
[ "anonymous reviewer 915e" ]
ICLR.cc/2013/conference
2013
title: review of Complexity of Representation and Inference in Compositional Models with Part Sharing review: This paper presents a complexity analysis of certain inference algorithms for compositional models of images based on part sharing. The intuition behind these models is that objects are composed of parts and that each of these parts can appear in many different objects, with sensible parallels (not mentioned explicitly by the authors) to typical sampling sets in image compression and to renormalization concepts in physics via the model's high-level executive summaries. The construction of hierarchical part dictionaries is an important and, in my view, challenging prerequisite, but this is not the subject of the paper. The authors discuss an approach for object detection and object-position inference exploiting part sharing and dynamic programming, and evaluate its serial and parallel complexity. The paper gathers interesting concepts and presents intuitively-sound theoretical results that could be of interest to the ICLR community.
ZhGJ9KQlXi9jk
Complexity of Representation and Inference in Compositional Models with Part Sharing
[ "Alan Yuille", "Roozbeh Mottaghi" ]
This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers.
[ "inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level" ]
https://openreview.net/pdf?id=ZhGJ9KQlXi9jk
https://openreview.net/forum?id=ZhGJ9KQlXi9jk
p7BE8U1NHl8Tr
review
1,361,997,540,000
ZhGJ9KQlXi9jk
[ "everyone" ]
[ "anonymous reviewer c1e8" ]
ICLR.cc/2013/conference
2013
title: review of Complexity of Representation and Inference in Compositional Models with Part Sharing review: The paper describes compositional object models that take the form of hierarchical generative models. Both object and part models provide (1) a set of part models, and (2) a generative model essentially describing how parts are composed. A distinctive feature of this model is the ability to support 'part sharing' because the same part model can be used by multiple objects and/or at various points of the object's hierarchical description. Recognition is then achieved with a Viterbi search. The central point of the paper is to show how part sharing provides opportunities to reduce the computational complexity of the search because computations can be reused. This is analogous to the complexity gain of a large convolutional network over a sliding window recognizer of similar architecture. Although I am not surprised by this result, and although I do not see it as very novel, this paper gives a self-contained description of a sophisticated and conceptually sound object recognition system. Stressing the complexity reduction associated with part sharing is smart because search complexity has become a central issue in computer vision. On the other hand, the unsupervised learning of the part decomposition is not described in this paper (reference [19]) and could have been relevant to ICLR.
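The reuse argument summarized in this review (a shared part's score is computed once per image position and then consumed by every higher-level model that contains it) can be illustrated with a deliberately tiny memoized dynamic-programming sketch. This is only a toy illustration of the sharing idea, not the paper's inference algorithm: positions are 1-D integers, child displacements are fixed offsets, and the executive-summary subsampling and probabilistic scores are abstracted into made-up numbers.

```python
from functools import lru_cache

TERMINAL_SCORE = {                 # hypothetical unary scores per image position
    ('edge', 0): 1.0, ('edge', 1): 0.3, ('blob', 0): 0.2, ('blob', 1): 0.9,
}
COMPOSITIONS = {                   # hypothetical shared part dictionary
    'corner':  [('edge', 0), ('blob', 1)],
    'objectA': [('corner', 0), ('edge', 1)],
    'objectB': [('corner', 0), ('blob', 0)],   # re-uses the shared 'corner' part
}

@lru_cache(maxsize=None)
def score(part, position):
    """Score of `part` anchored at `position`; memoization realizes the sharing."""
    if part not in COMPOSITIONS:                       # terminal (lowest-level) part
        return TERMINAL_SCORE.get((part, position), 0.0)
    return sum(score(child, position + offset)
               for child, offset in COMPOSITIONS[part])

# Both objects query the shared 'corner' part, but the memo cache guarantees
# that its score at each position is computed only once.
print(score('objectA', 0), score('objectB', 0))
```

On a serial machine the memo table plays the role of the shared lower-level computations in the complexity argument; on a parallel machine each (part, position) entry could be assigned to its own node.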
ZhGJ9KQlXi9jk
Complexity of Representation and Inference in Compositional Models with Part Sharing
[ "Alan Yuille", "Roozbeh Mottaghi" ]
This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers.
[ "inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level" ]
https://openreview.net/pdf?id=ZhGJ9KQlXi9jk
https://openreview.net/forum?id=ZhGJ9KQlXi9jk
zV1YApahdwAIu
comment
1,362,352,080,000
oCzZPts6ZYo6d
[ "everyone" ]
[ "Alan L. Yuille, Roozbeh Mottaghi" ]
ICLR.cc/2013/conference
2013
reply: We hadn't thought of renormalization or image compression. But renormalization does deal with scale (I think B. Gidas had some papers on this in the 90's). There probably is a relation to image compression which we should explore.
ttnAE7vaATtaK
Indoor Semantic Segmentation using depth information
[ "Camille Couprie", "Clement Farabet", "Laurent Najman", "Yann LeCun" ]
This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art performance on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real time using appropriate hardware such as an FPGA.
[ "depth information", "indoor scenes", "features", "indoor semantic segmentation", "work", "segmentation", "inputs", "area", "research" ]
https://openreview.net/pdf?id=ttnAE7vaATtaK
https://openreview.net/forum?id=ttnAE7vaATtaK
qO9gWZZ1gfqhl
review
1,362,163,380,000
ttnAE7vaATtaK
[ "everyone" ]
[ "anonymous reviewer 777f" ]
ICLR.cc/2013/conference
2013
title: review of Indoor Semantic Segmentation using depth information review: Segmentation with multi-scale max pooling CNN, applied to indoor vision, using depth information. Interesting paper! Fine results. Question: how does that compare to multi-scale max pooling CNN for a previous award-winning application, namely, segmentation of neuronal membranes (Ciresan et al, NIPS 2012)?
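For readers who want a concrete picture of the kind of model both the abstract and this review refer to (a convolutional network fed with RGB plus depth at several scales, producing per-pixel class scores), here is a minimal PyTorch sketch. It is a generic, modern illustration under stated assumptions rather than the authors' architecture or code: the layer sizes, number of scales, 14-class output and bilinear resampling are all placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleRGBD(nn.Module):
    """Toy multiscale feature extractor over 4-channel RGB-D input."""
    def __init__(self, n_classes=14, n_feats=32, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        self.features = nn.Sequential(            # weights shared across scales
            nn.Conv2d(4, n_feats, 7, padding=3), nn.ReLU(),
            nn.Conv2d(n_feats, n_feats, 7, padding=3), nn.ReLU(),
        )
        self.classify = nn.Conv2d(n_feats * len(scales), n_classes, 1)

    def forward(self, rgbd):                       # rgbd: (B, 4, H, W)
        h, w = rgbd.shape[-2:]
        maps = []
        for s in self.scales:
            x = rgbd if s == 1.0 else F.interpolate(
                rgbd, scale_factor=s, mode='bilinear', align_corners=False)
            f = self.features(x)                   # same filters at every scale
            maps.append(F.interpolate(f, size=(h, w), mode='bilinear',
                                      align_corners=False))
        return self.classify(torch.cat(maps, dim=1))   # per-pixel class scores

scores = MultiScaleRGBD()(torch.randn(1, 4, 120, 160))  # -> (1, 14, 120, 160)
```

A real system along these lines would add the preprocessing the authors mention in their replies (depth inpainting, possibly contrast normalization of the depth map) and some smoothing of the per-pixel scores.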
ttnAE7vaATtaK
Indoor Semantic Segmentation using depth information
[ "Camille Couprie", "Clement Farabet", "Laurent Najman", "Yann LeCun" ]
This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art performance on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real time using appropriate hardware such as an FPGA.
[ "depth information", "indoor scenes", "features", "indoor semantic segmentation", "work", "segmentation", "inputs", "area", "research" ]
https://openreview.net/pdf?id=ttnAE7vaATtaK
https://openreview.net/forum?id=ttnAE7vaATtaK
tG4Zt9xaZ8G5D
comment
1,363,298,100,000
Ub0AUfEOKkRO1
[ "everyone" ]
[ "Camille Couprie" ]
ICLR.cc/2013/conference
2013
reply: Thank you for your review and helpful comments. We computed and added error bars as suggested in Table 1. However, computing a standard deviation for the individual per-class means does not apply here: the per-class accuracies are not computed image by image. Each number is the ratio of the number of pixels correctly classified as a particular class to the number of pixels belonging to that class in the dataset. For the pixel-wise accuracy, we now give the standard deviation in Table 1, as well as the median. As the two variances are equal whether depth is used or not, we computed the statistical significance using a two-sample t-test, which results in a t-statistic equal to 1.54; this is far from the mean performance of 52.2 and thus we can consider that the two reported means are statistically significant. Regarding the class-by-class improvements displayed in Table 1, we discuss the fact that objects with a roughly constant depth appearance are in general more likely to benefit from depth information. As most of the scenes contain categories with this property, the improvements achieved using depth involve a smaller number of categories but a larger volume of data. To strengthen our comparison of the two networks with and without depth information, we now display the results obtained using only the multiscale network without depth information in Figure 2. We hope that the changes that we made in the paper (which should be updated within the next 24 hours) answer your concerns.
ttnAE7vaATtaK
Indoor Semantic Segmentation using depth information
[ "Camille Couprie", "Clement Farabet", "Laurent Najman", "Yann LeCun" ]
This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art performance on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real time using appropriate hardware such as an FPGA.
[ "depth information", "indoor scenes", "features", "indoor semantic segmentation", "work", "segmentation", "inputs", "area", "research" ]
https://openreview.net/pdf?id=ttnAE7vaATtaK
https://openreview.net/forum?id=ttnAE7vaATtaK
OOB_F66xrPKGA
comment
1,363,297,980,000
2-VeRGGdvD-58
[ "everyone" ]
[ "Camille Couprie" ]
ICLR.cc/2013/conference
2013
reply: Thank you for your review and helpful comments. The missing values in the depth acquisition were pre-processed using inpainting code available online on Nathan Silberman’s web page. We added the reference to the paper. In the paper, we made the observation that the classes for which depth fails to outperform the RGB model are the classes of objects for which the depth map does not vary too much. We now highlight this observation better with the addition of some depth maps in Figure 2. The question you are raising, about whether depth is always useful or whether there could be better ways to leverage depth data, is a very good one, and at the moment it is still unanswered. The current RGBD multiscale network is the best way we found to learn features using depth; perhaps we could improve the system by introducing an appropriate contrast normalization of the depth map, or perhaps we could combine the learned features using RGB and the learned features using RGBD…
ttnAE7vaATtaK
Indoor Semantic Segmentation using depth information
[ "Camille Couprie", "Clement Farabet", "Laurent Najman", "Yann LeCun" ]
This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in videos sequences that could be processed in real-time using appropriate hardware such as an FPGA.
[ "depth information", "indoor scenes", "features", "indoor semantic segmentation", "work", "segmentation", "inputs", "area", "research" ]
https://openreview.net/pdf?id=ttnAE7vaATtaK
https://openreview.net/forum?id=ttnAE7vaATtaK
Ub0AUfEOKkRO1
review
1,362,368,040,000
ttnAE7vaATtaK
[ "everyone" ]
[ "anonymous reviewer 5193" ]
ICLR.cc/2013/conference
2013
title: review of Indoor Semantic Segmentation using depth information review: This work builds on recent object-segmentation work by Farabet et al., by augmenting the pixel-processing pathways with ones that process a depth map from a Kinect RGBD camera. This work seems to me a well-motivated and natural extension now that RGBD sensors are readily available. The incremental value of the depth channel is not entirely clear from this paper. In principle, the depth information should be valuable. However, Table 1 shows that for the majority of object types, the network that ignores depth is actually more accurate. Although the averages at the bottom of Table 1 show that depth-enhanced segmentation is slightly better, I suspect that if those averages included error bars (and they should), the difference would be insignificant. In fact, all the accuracies in Table 1 should have error bars on them. The comparisons with the work of Silberman et al. are more favorable to the proposed model, but again, the comparison would be strengthened by discussion of statistical confidence. Qualitatively, I would have liked to see the output from the convolutional network of Farabet et al. without the depth channel, as a point of comparison in Figures 2 and 3. Without that point of comparison, Figures 2 and 3 are difficult to interpret as supporting evidence for the model using depth. Pro(s) - establishes baseline RGBD results with convolutional networks Con(s) - quantitative results lack confidence intervals - qualitative results missing important comparison to the non-RGBD network
ttnAE7vaATtaK
Indoor Semantic Segmentation using depth information
[ "Camille Couprie", "Clement Farabet", "Laurent Najman", "Yann LeCun" ]
This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in videos sequences that could be processed in real-time using appropriate hardware such as an FPGA.
[ "depth information", "indoor scenes", "features", "indoor semantic segmentation", "work", "segmentation", "inputs", "area", "research" ]
https://openreview.net/pdf?id=ttnAE7vaATtaK
https://openreview.net/forum?id=ttnAE7vaATtaK
VVbCVyTLqczWn
comment
1,363,297,440,000
qO9gWZZ1gfqhl
[ "everyone" ]
[ "Camille Couprie" ]
ICLR.cc/2013/conference
2013
reply: Thank you for your review and for pointing out the paper of Ciresan et al., which we added to our list of references. Like us, they apply the idea of using a kind of multi-scale network. However, Ciresan's approach to foveation differs from ours: where we use a multiscale pyramid to provide a foveated input to the network, they artificially blur the input's content, radially, and use non-uniform sampling to connect the network to it. The major advantage of using a pyramid is that the whole pyramid can be applied convolutionally, to larger input sizes. Once the model is trained, it must be applied as a sliding window to classify each pixel in the input. Using their method, which requires a radial blur centered on each pixel, the model cannot be applied convolutionally. This is a major difference, which dramatically impacts test time. Note: Ciresan's 2012 NIPS paper appeared after our first paper (ICML 2012) on the subject.
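As a rough sketch of the pyramid-based foveation contrasted here (a generic Gaussian pyramid built with OpenCV, not the exact preprocessing of the paper):

    import cv2

    def gaussian_pyramid(image, n_scales=3):
        # The same convolutional network weights are applied at every scale;
        # unlike a per-pixel radial blur, the whole computation stays
        # convolutional and can be run on arbitrarily large inputs.
        scales = [image]
        for _ in range(n_scales - 1):
            scales.append(cv2.pyrDown(scales[-1]))
        return scales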
ttnAE7vaATtaK
Indoor Semantic Segmentation using depth information
[ "Camille Couprie", "Clement Farabet", "Laurent Najman", "Yann LeCun" ]
This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in videos sequences that could be processed in real-time using appropriate hardware such as an FPGA.
[ "depth information", "indoor scenes", "features", "indoor semantic segmentation", "work", "segmentation", "inputs", "area", "research" ]
https://openreview.net/pdf?id=ttnAE7vaATtaK
https://openreview.net/forum?id=ttnAE7vaATtaK
2-VeRGGdvD-58
review
1,362,213,660,000
ttnAE7vaATtaK
[ "everyone" ]
[ "anonymous reviewer 03ba" ]
ICLR.cc/2013/conference
2013
title: review of Indoor Semantic Segmentation using depth information review: This work applies convolutional neural networks to the task of RGB-D indoor scene segmentation. The authors previously evaluated the same multi-scale conv net architecture on the data using only RGB information; this work demonstrates that for most segmentation classes, providing depth information to the conv net increases performance. The model simply adds depth as a separate channel to the existing RGB channels in a conv net. Depth has some unique properties, e.g. infinite or missing values depending on the sensor. It would be nice to see some consideration or experiments on how to properly integrate depth data into the existing model. The experiments demonstrate that a conv net using depth information is competitive on the datasets evaluated. However, it is surprising that the model leveraging depth is not better in all cases. Discussion of where the RGB-D model fails to outperform the RGB-only model would be a great contribution to add. This is especially apparent in Table 1. Does this suggest that depth isn't always useful, or that there could be better ways to leverage depth data? Minor notes: 'modalityies' misspelled on page 1 Overall: - A straightforward application of conv nets to RGB-D data, yielding fairly good results - More discussion of why depth fails to improve performance compared to an RGB-only model would strengthen the experimental findings
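A minimal sketch of the "depth as a separate channel" input construction discussed above; the normalization choices here are assumptions, not the paper's exact pipeline.

    import numpy as np

    def make_rgbd_input(rgb, depth):
        # rgb: (H, W, 3) float array; depth: (H, W) float array.
        # Missing or infinite depth readings would need special handling,
        # as noted above; here they are assumed to have been filled already.
        rgb_n = (rgb - rgb.mean()) / (rgb.std() + 1e-8)
        depth_n = (depth - depth.mean()) / (depth.std() + 1e-8)
        return np.concatenate([rgb_n, depth_n[..., None]], axis=-1)  # (H, W, 4)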
OpvgONa-3WODz
Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
[ "Guillaume Desjardins", "Razvan Pascanu", "Aaron Courville", "Yoshua Bengio" ]
This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitly storing the natural gradient metric $L$. This metric is shown to be the expected second derivative of the log-partition function (under the model distribution), or equivalently, the variance of the vector of partial derivatives of the energy function. We evaluate our method on the task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG does indeed have faster per-epoch convergence compared to Stochastic Maximum Likelihood with centering, though wall-clock performance is currently not competitive.
[ "natural gradient", "boltzmann machines", "mfng", "algorithm", "similar", "spirit", "martens", "algorithm belongs", "family", "truncated newton methods" ]
https://openreview.net/pdf?id=OpvgONa-3WODz
https://openreview.net/forum?id=OpvgONa-3WODz
LkyqLtotdQLG4
review
1,362,012,600,000
OpvgONa-3WODz
[ "everyone" ]
[ "anonymous reviewer 9212" ]
ICLR.cc/2013/conference
2013
title: review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines review: The paper describes a Natural Gradient technique to train Boltzmann machines. This is essentially the approach of Amari et al. (1992), in which the authors estimate the Fisher information matrix L with examples sampled from the model distribution using an MCMC approach with multiple chains. The gradient g is estimated from minibatches, and the weight update x is obtained by solving Lx=g with an efficient truncated algorithm. Doing so naively would be very costly because the matrix L is large. The trick is to express L as the covariance of the Jacobian S with respect to the model distribution and to take advantage of the linear nature of the sample average to estimate the product Lw in a manner that only requires storing the Jacobian for each sample. This is a neat idea. The empirical results are preliminary but show promise. The proposed algorithm requires fewer iterations but more wall-clock time than SML. Whether this is due to intrinsic properties of the algorithm or to deficiencies of the current implementation is not clear.
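The matrix-vector trick summarized above can be sketched in a few lines; S here is a hypothetical matrix of per-sample energy gradients drawn from the model distribution, and the code illustrates the idea rather than the authors' implementation.

    import numpy as np

    def metric_vector_product(S, v):
        # S: (n_samples, n_params) per-sample gradients of the energy function,
        # obtained from samples of the model distribution (e.g. parallel chains).
        # Returns an estimate of L v, where L is the empirical covariance of S
        # (the natural-gradient metric), using only products with S and never
        # forming the n_params x n_params matrix L itself.
        g_bar = S.mean(axis=0)
        S_centered = S - g_bar
        return S_centered.T @ (S_centered @ v) / S.shape[0]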
OpvgONa-3WODz
Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
[ "Guillaume Desjardins", "Razvan Pascanu", "Aaron Courville", "Yoshua Bengio" ]
This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitly storing the natural gradient metric $L$. This metric is shown to be the expected second derivative of the log-partition function (under the model distribution), or equivalently, the variance of the vector of partial derivatives of the energy function. We evaluate our method on the task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG does indeed have faster per-epoch convergence compared to Stochastic Maximum Likelihood with centering, though wall-clock performance is currently not competitive.
[ "natural gradient", "boltzmann machines", "mfng", "algorithm", "similar", "spirit", "martens", "algorithm belongs", "family", "truncated newton methods" ]
https://openreview.net/pdf?id=OpvgONa-3WODz
https://openreview.net/forum?id=OpvgONa-3WODz
o5qvoxIkjTokQ
review
1,362,294,960,000
OpvgONa-3WODz
[ "everyone" ]
[ "anonymous reviewer 7e2e" ]
ICLR.cc/2013/conference
2013
title: review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines review: This paper presents a natural gradient algorithm for deep Boltzmann machines. The authors must be commended for their extremely clear and succinct description of the natural gradient method in Section 2. This presentation is particularly useful because, indeed, many of the papers on information geometry are hard to follow. The derivations are also correct and sound. The derivations in the appendix are classical statistics results, but their addition is likely to improve readability of the paper. The experiments show that the natural gradient approach does better than stochastic maximum likelihood when plotting estimated likelihood against epochs. However, per unit of computation, the stochastic maximum likelihood method still does better. I was not able to understand remark 4 about mini-batches. Why are more parallel chains needed? Why not simply use a single chain but have longer memory? I strongly think this part of the paper could be improved if the authors write down the pseudo-code for their algorithm. Another suggestion is to use automatic algorithm configuration to find the optimal hyper-parameters for each method, given that they are so close. The trade-offs of second order versus first order optimization methods are well known in the deterministic case. There is also some theoretical guidance for the stochastic case. I encourage the authors to look at the following papers for this: A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets. N. Le Roux, M. Schmidt, F. Bach. NIPS, 2012. Hybrid Deterministic-Stochastic Methods for Data Fitting. M. Friedlander, M. Schmidt. SISC, 2012. 'On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning' R. Byrd, G. Chin and W. Neveitt, J. Nocedal. SIAM J. on Optimization, vol 21, issue 3, pages 977-995 (2011). 'Sample Size Selection in Optimization Methods for Machine Learning' R. Byrd, G. Chin, J. Nocedal and Y. Wu. to appear in Mathematical Programming B (2012). In practical terms, given that the methods are so close, how does the choice of implementation (GPUs, multi-cores, single machine) affect the comparison? Also, how data-dependent are the results? It would be nice to gain a deeper understanding of the conditions under which the natural gradient might or might not work better than stochastic maximum likelihood when training Boltzmann machines. Finally, I would like to point out a few typos to assist in improving the paper: Page 1: litterature should be literature. Section 2.2: cte should be const for consistency. Section 3: Avoid using x instead of grad_N in the linear equation for Lx=E(.); this causes overloading. For consistency with the previous section, please use grad_N instead. Section 4: Add a space between MNIST and [7]. Appendix 5.1: State that the expectation is with respect to p_{\theta}(x). Appendix 5.2: The expectation with respect to q_{\theta} should be with respect to p_{\theta}(x) to ensure consistency of notation, and correctness in this case. References: References [8] and [9] appear to be duplicates of the same paper by J. Martens.
OpvgONa-3WODz
Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
[ "Guillaume Desjardins", "Razvan Pascanu", "Aaron Courville", "Yoshua Bengio" ]
This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitly storing the natural gradient metric $L$. This metric is shown to be the expected second derivative of the log-partition function (under the model distribution), or equivalently, the variance of the vector of partial derivatives of the energy function. We evaluate our method on the task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG does indeed have faster per-epoch convergence compared to Stochastic Maximum Likelihood with centering, though wall-clock performance is currently not competitive.
[ "natural gradient", "boltzmann machines", "mfng", "algorithm", "similar", "spirit", "martens", "algorithm belongs", "family", "truncated newton methods" ]
https://openreview.net/pdf?id=OpvgONa-3WODz
https://openreview.net/forum?id=OpvgONa-3WODz
dt6KtywBaEvBC
review
1,362,379,800,000
OpvgONa-3WODz
[ "everyone" ]
[ "anonymous reviewer 77a7" ]
ICLR.cc/2013/conference
2013
title: review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines review: This paper introduces a new gradient descent algorithm that is based on Hessian-free optimization, but replaces the approximate Hessian-vector product by an approximate Fisher information matrix-vector product. It is used to train a DBM, faster than the baseline algorithm in terms of epochs needed, but at the cost of a computational slowdown (about a factor of 30). The paper is well written, and the algorithm is novel, although not fundamentally so. In terms of motivation, the new algorithm aims to attenuate the effect of ill-conditioned Hessians; however, that claim is weakened by the fact that the experiments seem to still require the centering trick. Also, reproducibility would be improved if pseudocode (including all tricks used) were provided in the appendix (or a link to an open-source implementation, even better). Other comments: * Remove the phrase 'first principles', it is not applicable here. * Is there a good reason to limit section 2.1 to a discrete and bounded domain X? * I'm not a big fan of naming a method whose essential ingredient is a metric 'Metric-free' (I know Martens did the same, but it's even less appropriate here). * I doubt the derivation in appendix 5.1 is a new result; it could be omitted. * Hyper-parameter tuning is over a small ad-hoc set, and the finally chosen values are not reported. * Results should be averaged over multiple runs, and error bars given. * The authors could clarify how the algorithm's complexity scales with problem dimension, and where the computational bottleneck lies, to help the reader judge its promise beyond the current results. * A pity that it took longer than 6 weeks for the promised 'next revision'; I had hoped the authors might resolve some of the self-identified weaknesses in the meantime.
OpvgONa-3WODz
Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
[ "Guillaume Desjardins", "Razvan Pascanu", "Aaron Courville", "Yoshua Bengio" ]
This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitly storing the natural gradient metric $L$. This metric is shown to be the expected second derivative of the log-partition function (under the model distribution), or equivalently, the variance of the vector of partial derivatives of the energy function. We evaluate our method on the task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG does indeed have faster per-epoch convergence compared to Stochastic Maximum Likelihood with centering, though wall-clock performance is currently not competitive.
[ "natural gradient", "boltzmann machines", "mfng", "algorithm", "similar", "spirit", "martens", "algorithm belongs", "family", "truncated newton methods" ]
https://openreview.net/pdf?id=OpvgONa-3WODz
https://openreview.net/forum?id=OpvgONa-3WODz
pC-4pGPkfMnuQ
review
1,363,459,200,000
OpvgONa-3WODz
[ "everyone" ]
[ "Guillaume Desjardins, Razvan Pascanu, Aaron Courville, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
review: Thank you to the reviewers for the helpful feedback. The provided references will no doubt come in handy for future work. To all reviewers: In an effort to speed up run time, we have re-implemented a significant portion of the MFNG algorithm. This resulted in large speedups for the diagonal approximation of MFNG, and all-around lower memory consumption. Unfortunately, this has delayed the submission of a new manuscript, which is still under preparation. The focus of this new revision will be on: (1) reporting means and standard deviations for Fig. 1 across multiple seeds; (2) a more careful use of damping and the use of annealed learning rates; (3) results on a second dataset, and hopefully a second model family (Gaussian RBMs). In the meantime, we have uploaded a new version which aims to clarify and provide additional technical details where the reviewers had found it necessary. The main modifications are: * a new algorithmic description of MFNG * a new graph which analyzes the runtime performance of the algorithm, breaking it down between the various steps of the algorithm (sampling, gradient computation, matrix-vector product, and MinRes iterations). The paper should appear shortly on arXiv, and can be accessed here in the meantime: http://brainlogging.files.wordpress.com/2013/03/iclr2013_submission1.pdf An open-source implementation of MFNG can be accessed at the following URL: https://github.com/gdesjardins/MFNG.git To Anonymous 7e2e: There are numerous advantages to sampling from parallel chains (with fewer Gibbs steps between samples), compared to using consecutive (or sub-sampled) samples generated by a single Markov chain. First, running multiple chains guarantees that the samples are independent. Running a single chain will no doubt result in correlated samples, which will negatively impact our estimates of the gradient and the metric. Second, simulating multiple chains is an implicitly parallel process, which can be implemented efficiently on both CPU and GPU (especially so on GPU). The downside, however, is an increase in memory consumption. To Anonymous 77a7: >> In terms of motivation, the new algorithm aims to attenuate the effect of ill-conditioned Hessians; however, that claim is weakened by the fact that the experiments seem to still require the centering trick. Since ours is a natural gradient method, it attenuates the effect of ill-conditioned probability manifolds (expected Hessian of log Z, under the model distribution), not ill-conditioning of the expected Hessian (under the empirical distribution). It is thus possible that centering addresses the latter form of ill-conditioning. Another hypothesis is that centering provides a better initialization point, around which the natural gradient metric is better conditioned and thus easier to invert. More experiments are required to answer these questions. >> Also, reproducibility would be improved if pseudocode (including all tricks used) were provided in the appendix (or a link to an open-source implementation, even better). Our source code and algorithmic description should shed some light on this issue. The only 'trick' we currently use is a fixed damping coefficient along the diagonal, to improve conditioning and speed up convergence of our solver. Alternative forms of initialization and preconditioning were not used in the experiments. >> Is there a good reason to limit section 2.1 to a discrete and bounded domain chi? These limitations mostly reflect our interest in Boltzmann Machines. Generalizing these results to unbounded domains (or continuous variables) remains to be investigated. >> Hyper-parameter tuning is over a small ad-hoc set, and the finally chosen values are not reported. The results of our grid search have been added to the caption of Figure 1.
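To illustrate how a fixed diagonal damping coefficient and truncated MinRes iterations fit together, here is a generic sketch; metric_vec is assumed to be a function returning a sample-based estimate of the product L v (for instance the covariance-based product described in the reviews above), and this is not the authors' code.

    from scipy.sparse.linalg import LinearOperator, minres

    def natural_gradient_direction(metric_vec, grad, n_params, damping=0.1, max_iter=20):
        # Approximately solve (L + damping * I) x = grad using only
        # matrix-vector products, as in truncated Newton / Hessian-free methods.
        A = LinearOperator((n_params, n_params),
                           matvec=lambda v: metric_vec(v) + damping * v)
        x, info = minres(A, grad, maxiter=max_iter)
        return x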
yyC_7RZTkUD5-
Deep Predictive Coding Networks
[ "Rakesh Chalasani", "Jose C. Principe" ]
The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.
[ "model", "networks", "priors", "deep predictive", "predictive", "quality", "data representation", "deep learning methods", "prior model", "representations" ]
https://openreview.net/pdf?id=yyC_7RZTkUD5-
https://openreview.net/forum?id=yyC_7RZTkUD5-
d6u7vbCNJV6Q8
review
1,361,968,020,000
yyC_7RZTkUD5-
[ "everyone" ]
[ "anonymous reviewer ac47" ]
ICLR.cc/2013/conference
2013
title: review of Deep Predictive Coding Networks review: Deep predictive coding networks This paper introduces a new model which combines bottom-up, top-down, and temporal information to learn a generative model in an unsupervised fashion on videos. The model is formulated in terms of states, which carry temporal consistency information between time steps, and causes, which are the latent variables inferred from the input image that attempt to explain what is in the image. Pros: Somewhat interesting filters are learned in the second layer of the model, though these have been shown in prior work. Noise reduction on the toy images seems reasonable. Cons: The explanation of the model was overly complicated. After reading the entire explanation it appears the model is simply doing sparse coding with ISTA alternating on the states and causes. The gradient for ISTA simply has the gradients for the overall cost function, just as in sparse coding, but this cost function has some extra temporal terms. The noise reduction is only on toy images and it is not obvious if this is what you would also get with sparse coding using larger patch sizes and high amounts of sparsity. The explanation that points between clusters come from changes in the sequences should also apply to the clean video, because, as the text mentions, that video changes as well. This is likely due to multiple objects overlapping instead and confusing the model. Figure 1 should include the variable names because reading the text and consulting the figure is not very helpful currently. It is hard to reason about what each of A, B, and C is doing without a picture of what they learn on typical data. The layer 1 features seem fairly complex and noisy for the first layer of an image model, which typically learns Gabor-like features. Where did z come from in equation 11? It is not at all obvious why the states should be temporally consistent and not the causes. The causes are pooled versions of the states and should be more invariant to changes at the input between frames. Novelty and Quality: The paper introduces a novel extension to hierarchical sparse coding by incorporating temporal information at each layer of the model. The poor explanation of this relatively simple idea holds the paper back slightly.
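The reading of the inference procedure as "sparse coding with ISTA plus extra temporal terms" can be made concrete with a generic sketch; the quadratic temporal penalty below is an illustrative stand-in, not the paper's exact cost function.

    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def ista_temporal(y, C, x_prev, lam=0.1, gamma=0.1, n_iter=100):
        # Minimizes 0.5*||y - C x||^2 + 0.5*gamma*||x - x_prev||^2 + lam*||x||_1
        # with ISTA; the gamma term plays the role of a temporal-consistency prior.
        L_lip = np.linalg.norm(C, 2) ** 2 + gamma   # Lipschitz constant of the smooth part
        x = x_prev.copy()
        for _ in range(n_iter):
            grad = C.T @ (C @ x - y) + gamma * (x - x_prev)
            x = soft_threshold(x - grad / L_lip, lam / L_lip)
        return x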
yyC_7RZTkUD5-
Deep Predictive Coding Networks
[ "Rakesh Chalasani", "Jose C. Principe" ]
The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.
[ "model", "networks", "priors", "deep predictive", "predictive", "quality", "data representation", "deep learning methods", "prior model", "representations" ]
https://openreview.net/pdf?id=yyC_7RZTkUD5-
https://openreview.net/forum?id=yyC_7RZTkUD5-
Xu4KaWxqIDurf
review
1,363,393,200,000
yyC_7RZTkUD5-
[ "everyone" ]
[ "Rakesh Chalasani, Jose C. Principe" ]
ICLR.cc/2013/conference
2013
review: The revised paper has been uploaded to arXiv. It will be announced on March 18th. In the meantime, the paper is also available at https://www.dropbox.com/s/klmpu482q6nt1ws/DPCN.pdf
yyC_7RZTkUD5-
Deep Predictive Coding Networks
[ "Rakesh Chalasani", "Jose C. Principe" ]
The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.
[ "model", "networks", "priors", "deep predictive", "predictive", "quality", "data representation", "deep learning methods", "prior model", "representations" ]
https://openreview.net/pdf?id=yyC_7RZTkUD5-
https://openreview.net/forum?id=yyC_7RZTkUD5-
00ZvUXp_e10_E
comment
1,363,392,660,000
EEhwkCLtAuko7
[ "everyone" ]
[ "Rakesh Chalasani, Jose C. Principe" ]
ICLR.cc/2013/conference
2013
reply: Thank you for your review and comments, particularly for pointing out some mistakes in the paper. Following is our response to some concerns you have raised. >>> 'You should state the functional form for F and G!! Working backwards from the energy function, it looks as if these are just linear functions?' We use the generalized state-space equations in Eq.1 and Eq.2 to motivate the relation between the proposed model and dynamic networks. However, please note that it is difficult to state the explicit form of F and G, since a sparsity constraint even on a linear dynamical system leads to a non-linear mapping between the observations and the states. >>> 'In Eq. 1 should F( x_t, u_t ) instead just be F( x_t )? Eqs. 3 and 4 suggest it should just be F( x_t ), and this would resolve points which I found confusing later in the paper.' Agreed. We made the appropriate changes in the revised paper. >>> 'The relationship between the energy functions in eqs. 3 and 4 is confusing to me. (this may have to do with the (non?)-dependence of F on u_t)' We made this explicit in the revised paper. Eq.3 represents the energy function for inferring x_t with u_t fixed, and Eq.4 represents the energy function for inferring u_t with x_t fixed. In order to be more clear, we now write a unified energy function (Eq. 5) from which we jointly infer both x_t and u_t. >>> 'Section 2.3.1, 'It is easy to show that this is equivalent to finding the mode of the distribution...': You probably mean MAP not mode. Additionally this is non-obvious. It seems like this would especially not be true after marginalizing out u_t. You've never written the joint distributions over p(x_t, y_t, x_t-1), and the role of the different energy functions was unclear.' Agreed, this statement is incorrect and has been removed. >>> 'Section 3.1: In a linear mapping, how are 4 overlapping patches different from a single larger patch?' Please note that the states from the 4 overlapping patches are pooled using a non-linear function (the sum of the absolute values of the state vectors). Hence, the output is no longer a linear mapping. >>> 'Section 3.2: Do you do anything about the discontinuities which would occur between the 100-frame sequences?' No, we simply consider the concatenated sequence as a single video. This is made clearer in the paper.
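One way to write the pooling step described in this reply, in our own notation rather than the paper's, is the element-wise sum of absolute values over the four overlapping patches:

    \tilde{x}_t = \sum_{p=1}^{4} \left| x_t^{(p)} \right|

where x_t^{(p)} is the state vector inferred from patch p and the absolute value is taken component-wise; the sum of absolute values is what makes the patch-to-output mapping non-linear even though each per-patch generative model is linear.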
yyC_7RZTkUD5-
Deep Predictive Coding Networks
[ "Rakesh Chalasani", "Jose C. Principe" ]
The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.
[ "model", "networks", "priors", "deep predictive", "predictive", "quality", "data representation", "deep learning methods", "prior model", "representations" ]
https://openreview.net/pdf?id=yyC_7RZTkUD5-
https://openreview.net/forum?id=yyC_7RZTkUD5-
iiUe8HAsepist
comment
1,363,392,180,000
d6u7vbCNJV6Q8
[ "everyone" ]
[ "Rakesh Chalasani, Jose C. Principe" ]
ICLR.cc/2013/conference
2013
reply: Thank you for your review and comments. We revised the paper to address most of your concerns. Following is our response to some specific points you have raised. >>> 'The explanation of the model was overly complicated. After reading the entire explanation it appears the model is simply doing sparse coding with ISTA alternating on the states and causes. The gradient for ISTA simply has the gradients for the overall cost function, just as in sparse coding, but this cost function has some extra temporal terms.' We have made major changes to the paper to improve the presentation of the model. Hopefully the newer version will make the explanation more clear. We would also like to emphasize that the paper makes two important contributions: (1) as you have pointed out, it introduces sparse coding in dynamical models and solves the resulting problem using a novel inference procedure similar to ISTA; (2) it considers top-down information while performing inference in the hierarchical model. >>> 'The noise reduction is only on toy images and it is not obvious if this is what you would also get with sparse coding using larger patch sizes and high amounts of sparsity.' We agree with you that it would strengthen our arguments to show denoising on large images or videos. However, scaling this model to large images requires a convolutional network-like model. This is ongoing work, and we are presently developing a convolutional model for DPCN. >>> 'The explanation that points between clusters come from changes in the sequences should also apply to the clean video, because, as the text mentions, that video changes as well. This is likely due to multiple objects overlapping instead and confusing the model.' Corrected. The points between the clusters appear because we enforce temporal coherence on the causes belonging to two consecutive frames at the top layer (see Section 2.4). It is not due to gradual change in the sequences, as stated previously. >>> 'Figure 1 should include the variable names because reading the text and consulting the figure is not very helpful currently.' Corrected. Also, a new figure has been added to bring more clarity. >>> 'It is hard to reason about what each of A, B, and C is doing without a picture of what they learn on typical data. The layer 1 features seem fairly complex and noisy for the first layer of an image model, which typically learns Gabor-like features.' Please see the supplementary material, Section A.4, for a visualization of the first-layer parameters A, B and C. Also, please note that Figure 2 shows the visualization of the invariant matrices, B, in a two-layered network. These are obtained by taking linear combinations of the Gabor-like filters in C^(1) (see Figure 6) and hence represent more complex structures. This is made more clear in the paper. >>> 'Where did z come from in equation 11?' Corrected. It is the Gaussian transition noise over the parameters. >>> 'It is not at all obvious why the states should be temporally consistent and not the causes. The causes are pooled versions of the states and should be more invariant to changes at the input between frames.' We say the states are more temporally 'consistent' to indicate that they are more stable than in sparse coding, particularly under high sparsity, because they have to maintain the temporal dependencies. On the other hand, we agree with you that the causes are more invariant to changes in the input and hence are temporally 'coherent'.
yyC_7RZTkUD5-
Deep Predictive Coding Networks
[ "Rakesh Chalasani", "Jose C. Principe" ]
The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.
[ "model", "networks", "priors", "deep predictive", "predictive", "quality", "data representation", "deep learning methods", "prior model", "representations" ]
https://openreview.net/pdf?id=yyC_7RZTkUD5-
https://openreview.net/forum?id=yyC_7RZTkUD5-
EEhwkCLtAuko7
review
1,362,405,300,000
yyC_7RZTkUD5-
[ "everyone" ]
[ "anonymous reviewer 62ac" ]
ICLR.cc/2013/conference
2013
title: review of Deep Predictive Coding Networks review: This paper attempts to capture both the temporal dynamics of signals and the contribution of top down connections for inference using a deep model. The experimental results are qualitatively encouraging, and the model structure seems like a sensible direction to pursue. I like the connection to dynamical systems. The mathematical presentation is disorganized though, and it would have been nice to see some sort of benchmark or externally meaningful quantitative comparison in the experimental results. More specific comments: You should state the functional form for F and G!! Working backwards from the energy function, it looks as if these are just linear functions? In Eq. 1 should F( x_t, u_t ) instead just be F( x_t )? Eqs. 3 and 4 suggest it should just be F( x_t ), and this would resolve points which I found confusing later in the paper. The relationship between the energy functions in eqs. 3 and 4 is confusing to me. (this may have to do with the (non?)-dependence of F on u_t) Section 2.3.1, 'It is easy to show that this is equivalent to finding the mode of the distribution...': You probably mean MAP not mode. Additionally this is non-obvious. It seems like this would especially not be true after marginalizing out u_t. You've never written the joint distributions over p(x_t, y_t, x_t-1), and the role of the different energy functions was unclear. Section 3.1: In a linear mapping, how are 4 overlapping patches different from a single larger patch? Section 3.2: Do you do anything about the discontinuities which would occur between the 100-frame sequences?
yyC_7RZTkUD5-
Deep Predictive Coding Networks
[ "Rakesh Chalasani", "Jose C. Principe" ]
The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.
[ "model", "networks", "priors", "deep predictive", "predictive", "quality", "data representation", "deep learning methods", "prior model", "representations" ]
https://openreview.net/pdf?id=yyC_7RZTkUD5-
https://openreview.net/forum?id=yyC_7RZTkUD5-
o1YP1AMjPx1jv
comment
1,363,393,020,000
Za8LX-xwgqXw5
[ "everyone" ]
[ "Rakesh Chalasani, Jose C. Principe" ]
ICLR.cc/2013/conference
2013
reply: Thank you for your review and comments. We revised the paper to address most of your concerns. Following is our response to some specific points you have raised. >>> 'The clarity of the paper needs to be improved. For example, it will be helpful to motivate the specific formulation of the model more clearly.' We made some major changes to improve the presentation of the model, with more emphasis on explaining the formulation. Hopefully the revised version will improve the clarity of the paper. >>> 'The empirical evaluation of the model could be strengthened by directly comparing the DPCN to related works on non-synthetic datasets.' We agree that the empirical evaluation could be strengthened by comparing DPCN with other models on tasks like denoising, classification, etc., on large image and video datasets. However, to scale this model to larger inputs we require convolutional network-like models, similar to many other methods. This is ongoing work, and we are presently working on a convolutional model for DPCN. >>> 'In the beginning of Section 2.1, please define P, D, K to improve clarity. >>> In Section 2.2, little explanation about the pooling matrix B is given. Also, more explanation about equation 4 would be desirable. >>> What is z_{t} in Equation 11?' Corrected. These are explained more clearly in the revised paper. z_{t} is the Gaussian transition noise over the parameters. >>> 'In Section 2.2, it's not clear how u_hat is computed.' This has been moved to Section 2.4 in the revised paper, where more explanation is provided about u_{hat}.
yyC_7RZTkUD5-
Deep Predictive Coding Networks
[ "Rakesh Chalasani", "Jose C. Principe" ]
The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.
[ "model", "networks", "priors", "deep predictive", "predictive", "quality", "data representation", "deep learning methods", "prior model", "representations" ]
https://openreview.net/pdf?id=yyC_7RZTkUD5-
https://openreview.net/forum?id=yyC_7RZTkUD5-
XTZrXGh8rENYB
comment
1,363,393,320,000
3vEUvBbCrO8cu
[ "everyone" ]
[ "Rakesh Chalasani" ]
ICLR.cc/2013/conference
2013
reply: This is in reply to reviewer 1829, mistakenly pasted here. Please ignore.
yyC_7RZTkUD5-
Deep Predictive Coding Networks
[ "Rakesh Chalasani", "Jose C. Principe" ]
The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.
[ "model", "networks", "priors", "deep predictive", "predictive", "quality", "data representation", "deep learning methods", "prior model", "representations" ]
https://openreview.net/pdf?id=yyC_7RZTkUD5-
https://openreview.net/forum?id=yyC_7RZTkUD5-
Za8LX-xwgqXw5
review
1,362,498,780,000
yyC_7RZTkUD5-
[ "everyone" ]
[ "anonymous reviewer 1829" ]
ICLR.cc/2013/conference
2013
title: review of Deep Predictive Coding Networks review: A brief summary of the paper's contributions, in the context of prior work. The paper proposes a hierarchical sparse generative model in the context of a dynamical system. The model can capture temporal dependencies in time-varying data, and top-down information (from high-level contextual/causal units) can modulate the states and observations in lower layers. Experiments were conducted on a natural video dataset, and on a synthetic video dataset with moving geometric shapes. On the natural video dataset, the learned receptive fields represent edge detectors in the first layer, and higher-level concepts such as corners and junctions in the second layer. In the synthetic sequence dataset, hierarchical top-down inference is used to robustly infer the “causal” units associated with object shapes. An assessment of novelty and quality. This work can be viewed as a novel extension of hierarchical sparse coding to temporal data. Specifically, it is interesting to see how to incorporate dynamical systems into sparse hierarchical models (that alternate between state units and causal units), and how the model can perform bottom-up/top-down inference. The use of Nesterov's method to approximate the non-smooth state transition terms in equation 5 is interesting. The clarity of the paper needs to be improved. For example, it will be helpful to motivate the specific formulation of the model more clearly (also, see comments below). The experimental results (identifying high-level causes from corrupted temporal data) seem quite reasonable on the synthetic dataset. However, the results are all too qualitative. The empirical evaluation of the model could be strengthened by directly comparing the DPCN to related works on non-synthetic datasets. Other questions and comments: - In the beginning of Section 2.1, please define P, D, K to improve clarity. - In Section 2.2, little explanation about the pooling matrix B is given. Also, more explanation about equation 4 would be desirable. - What is z_{t} in Equation 11? - In Section 2.2, it’s not clear how u_hat is computed. A list of pros and cons (reasons to accept/reject). Pros: - The formulation and the proposed solution are technically interesting. - Experimental results on a synthetic video dataset provide a proof-of-concept demonstration. Cons: - The significance of the experiments is quite limited. There is no empirical comparison to other models on real tasks. - Inference seems to be complicated and computationally expensive. - Unclear presentation
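For readers unfamiliar with the smoothing technique mentioned here, one standard instance of Nesterov's smoothing replaces the non-smooth absolute value |z| by a Huber-type surrogate (a generic illustration; the paper's exact construction for the state transition terms may differ):

    f_\mu(z) = \begin{cases} z^2 / (2\mu) & \text{if } |z| \le \mu \\ |z| - \mu/2 & \text{otherwise} \end{cases}

which is differentiable, has a (1/mu)-Lipschitz gradient, and approximates |z| to within mu/2, so the smoothed objective can be minimized with gradient-based (including accelerated) methods.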
yyC_7RZTkUD5-
Deep Predictive Coding Networks
[ "Rakesh Chalasani", "Jose C. Principe" ]
The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.
[ "model", "networks", "priors", "deep predictive", "predictive", "quality", "data representation", "deep learning methods", "prior model", "representations" ]
https://openreview.net/pdf?id=yyC_7RZTkUD5-
https://openreview.net/forum?id=yyC_7RZTkUD5-
3vEUvBbCrO8cu
review
1,363,392,960,000
yyC_7RZTkUD5-
[ "everyone" ]
[ "Rakesh Chalasani, Jose C. Principe" ]
ICLR.cc/2013/conference
2013
review: Thank you for your review and comments. We revised the paper to address most of your concerns. Following is our response to some specific points you have raised. >>> 'The clarity of the paper needs to be improved. For example, it will be helpful to motivate the specific formulation of the model more clearly.' We made some major changes to improve the presentation of the model, with more emphasis on explaining the formulation. Hopefully the revised version will improve the clarity of the paper. >>> 'The empirical evaluation of the model could be strengthened by directly comparing the DPCN to related works on non-synthetic datasets.' We agree that the empirical evaluation could be strengthened by comparing DPCN with other models on tasks like denoising, classification, etc., on large image and video datasets. However, to scale this model to larger inputs we require convolutional network-like models, similar to many other methods. This is ongoing work, and we are presently working on a convolutional model for DPCN. >>> 'In the beginning of Section 2.1, please define P, D, K to improve clarity. >>> In Section 2.2, little explanation about the pooling matrix B is given. Also, more explanation about equation 4 would be desirable. >>> What is z_{t} in Equation 11?' Corrected. These are explained more clearly in the revised paper. z_{t} is the Gaussian transition noise over the parameters. >>> 'In Section 2.2, it's not clear how u_hat is computed.' This has been moved to Section 2.4 in the revised paper, where more explanation is provided about u_{hat}.
zzEf5eKLmAG0o
Learning Features with Structure-Adapting Multi-view Exponential Family Harmoniums
[ "YoonSeop Kang", "Seungjin Choi" ]
We propose a graphical model for multi-view feature extraction that automatically adapts its structure to achieve a better representation of the data distribution. The proposed model, the structure-adapting multi-view harmonium (SA-MVH), has switch parameters that control the connections between hidden nodes and input views, and learns the switch parameters during training. Numerical experiments on a synthetic and a real-world dataset demonstrate the useful behavior of the SA-MVH, compared to existing multi-view feature extraction methods.
[ "features", "exponential family harmoniums", "graphical model", "feature extraction", "structure", "better representation", "data distribution", "model", "harmonium", "parameters" ]
https://openreview.net/pdf?id=zzEf5eKLmAG0o
https://openreview.net/forum?id=zzEf5eKLmAG0o
UUlHmZjBOIUBb
review
1,362,353,160,000
zzEf5eKLmAG0o
[ "everyone" ]
[ "anonymous reviewer d966" ]
ICLR.cc/2013/conference
2013
title: review of Learning Features with Structure-Adapting Multi-view Exponential Family Harmoniums review: The paper introduces a new algorithm for simultaneously learning a hidden layer (latent representation) for multiple data views as well as automatically segmenting that hidden layer into shared and view-specific nodes. It builds on the previous multi-view harmonium (MVH) algorithm by adding (sigmoidal) switch parameters that turn a connection on or off between a view and a hidden node and uses gradient descent to learn those switch parameters. The optimization is similar to MVH, with a slight modification of the joint distribution between views and hidden nodes, resulting in a change in the gradients for all parameters and a new switch variable to descend on. This new algorithm, therefore, is somewhat novel; the quality of the explanation and writing is high; and the experimental quality is reasonable. Pros 1. The paper is well-written and organized. 2. The algorithm in the paper proposes a way to avoid hand-designing shared and private (view-specific) nodes, which is an important contribution. 3. The experimental results indicate some interesting properties of the algorithm, in particular demonstrating that the algorithm extracts reasonable shared and view-specific hidden nodes. Cons 1. The descent directions have W and the switch parameters, s_kj, coupled, which might make learning slow. Experimental results should indicate computation time. 2. The results do not have error bars (in Table 1), so it is unclear if they are statistically significant (the small difference suggests that they may not be). 3. The motivation in this paper is to enable learning of the private and shared representations automatically. However, DWH (only a shared representation) actually seems to perform generally better than MVH (shared and private). The experiments should better explore this question. It might also be a good idea to have a baseline comparison with CCA. 4. In light of Con (3), the algorithm should also be compared to multi-view algorithms that learn only shared representations but do not require the size of the hidden-node set to be fixed (such as the recent relaxed-rank convex multi-view approach in 'Convex Multiview Subspace Learning', M. White, Y. Yu, X. Zhang and D. Schuurmans, NIPS 2012). In this case, the relaxed-rank regularizer does not fix the size of the hidden-node set, but regularizes so as to set several hidden nodes to zero. This is similar to the approach proposed in this paper, where a node is not used if the sigmoid value is < 0.5. Note that these relaxed-rank approaches do not explicitly maximize the likelihood for an exponential family distribution; instead, they allow general Bregman divergences, which have been shown to have a one-to-one correspondence with exponential family distributions (see 'Clustering with Bregman divergences', A. Banerjee, S. Merugu, I. Dhillon and J. Ghosh, JMLR 2005). Therefore, by selecting a certain Bregman divergence, the approach in this paper can be compared to the relaxed-rank approaches.
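The switch mechanism described above can be summarized by gating each weight with a sigmoid; the indexing below is our reading of the review's notation, not necessarily the paper's exact parameterization:

    \tilde{W}^{(k)}_{ij} = \sigma(s_{kj}) \, W^{(k)}_{ij}, \qquad \sigma(s) = \frac{1}{1 + e^{-s}}

so that hidden node j is effectively connected to view k when sigma(s_kj) is close to 1 and disconnected when it is close to 0. A node whose switches are on for every view behaves as a shared node, while a node whose switch is on for only one view behaves as a view-specific node; learning the s_kj by gradient descent is what removes the need to fix the shared/private split by hand.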
zzEf5eKLmAG0o
Learning Features with Structure-Adapting Multi-view Exponential Family Harmoniums
[ "YoonSeop Kang", "Seungjin Choi" ]
We propose a graphical model for multi-view feature extraction that automatically adapts its structure to achieve a better representation of the data distribution. The proposed model, the structure-adapting multi-view harmonium (SA-MVH), has switch parameters that control the connections between hidden nodes and input views, and learns the switch parameters during training. Numerical experiments on a synthetic and a real-world dataset demonstrate the useful behavior of the SA-MVH, compared to existing multi-view feature extraction methods.
[ "features", "exponential family harmoniums", "graphical model", "feature extraction", "structure", "better representation", "data distribution", "model", "harmonium", "parameters" ]
https://openreview.net/pdf?id=zzEf5eKLmAG0o
https://openreview.net/forum?id=zzEf5eKLmAG0o
tt7CtuzeCYt5H
comment
1,363,857,240,000
DNKnDqeVJmgPF
[ "everyone" ]
[ "YoonSeop Kang" ]
ICLR.cc/2013/conference
2013
reply: 1. The distribution of sigma(s_{kj}) had modes near 0 and 1, but the graph of the distribution was omitted due to space constraints. The amount of separation between the modes was affected by hyperparameters that were not mentioned in the paper. 2. It is true that the separation between digit features and noise in our model is not perfect, but it is also true that the view-specific features contain more noisy features than the shared ones. We appreciate your suggestion of additional experiments on de-noising digits, and we will present the results of those experiments if we get a chance.
zzEf5eKLmAG0o
Learning Features with Structure-Adapting Multi-view Exponential Family Harmoniums
[ "YoonSeop Kang", "Seungjin Choi" ]
We propose a graphical model for multi-view feature extraction that automatically adapts its structure to achieve a better representation of the data distribution. The proposed model, the structure-adapting multi-view harmonium (SA-MVH), has switch parameters that control the connections between hidden nodes and input views and are learned during training. Numerical experiments on a synthetic and a real-world dataset demonstrate the useful behavior of SA-MVH compared to existing multi-view feature extraction methods.
[ "features", "exponential family harmoniums", "graphical model", "feature extraction", "structure", "better representation", "data distribution", "model", "harmonium", "parameters" ]
https://openreview.net/pdf?id=zzEf5eKLmAG0o
https://openreview.net/forum?id=zzEf5eKLmAG0o
qqdsq7GUspqD2
comment
1,363,857,540,000
UUlHmZjBOIUBb
[ "everyone" ]
[ "YoonSeop Kang" ]
ICLR.cc/2013/conference
2013
reply: 1. As the switch parameters converge quickly, the training time of our model was not very different from that of DWH. 2. We performed the experiment several times and the results were consistent. Still, it is our fault that we did not repeat the experiments enough times to add error bars to the results. 3. MVHs are often outperformed by DWHs unless the sizes of the latent node sets are carefully chosen, and this is one of the most important reasons for introducing switch parameters. To make our motivation clear, we assigned 50% of the hidden nodes as shared and evenly assigned the remaining hidden nodes as view-specific nodes for each view. We did not compare our method to CCA because we thought DWH would be a better example of a model with only a shared representation. 4. We were not aware of White et al.'s work when we submitted our paper, and therefore could not make a comparison with their model.
zzEf5eKLmAG0o
Learning Features with Structure-Adapting Multi-view Exponential Family Harmoniums
[ "YoonSeop Kang", "Seungjin Choi" ]
We propose a graphical model for multi-view feature extraction that automatically adapts its structure to achieve a better representation of the data distribution. The proposed model, the structure-adapting multi-view harmonium (SA-MVH), has switch parameters that control the connections between hidden nodes and input views and are learned during training. Numerical experiments on a synthetic and a real-world dataset demonstrate the useful behavior of SA-MVH compared to existing multi-view feature extraction methods.
[ "features", "exponential family harmoniums", "graphical model", "feature extraction", "structure", "better representation", "data distribution", "model", "harmonium", "parameters" ]
https://openreview.net/pdf?id=zzEf5eKLmAG0o
https://openreview.net/forum?id=zzEf5eKLmAG0o
DNKnDqeVJmgPF
review
1,360,866,060,000
zzEf5eKLmAG0o
[ "everyone" ]
[ "anonymous reviewer 0e7e" ]
ICLR.cc/2013/conference
2013
title: review of Learning Features with Structure-Adapting Multi-view Exponential Family Harmoniums review: The authors propose a bipartite, undirected graphical model for multi-view learning, called the structure-adapting multi-view harmonium (SA-MVH). The model is based on their earlier multi-view harmonium (MVH) (Kang & Choi, 2011), in which hidden units were separated into a shared set and view-specific sets. Unlike MVH, which explicitly restricts edges, the visible and hidden units in the proposed SA-MVH are fully connected to each other, with switch parameters s_{kj} indicating how likely the j-th hidden unit is to correspond to the k-th view. It would have been better if the distribution of the s_{kj} (or sigma(s_{kj})) had been provided. Unless the distribution has clear modes near 0 and 1, it is difficult to tell why learning w^{(k)}_{ij} and s_{kj} separately is better than just learning the product tilde{w}^{(k)}_{ij} = w^{(k)}_{ij} sigma(s_{kj}) altogether (as in the dual-wing harmonium, DWH). Still, the empirical results (experiment 2) show that the features extracted by SA-MVH outperform both MVH and DWH. The visualizations of shared and view-specific features from the first experiment do not clearly show the power of the proposed method; for instance, the filters of Roman digits among the shared features still seem to have horizontal noise. It would be better to try some other tasks with the trained model: would it be possible to sample clean digits (without horizontal or vertical noise) from the model if the view-specific features were forced off? Would it be possible to denoise corrupted digits? And so on. Typo: Fig. 1 (c): sigma(s_{1j}) and sigma(s_{2j})
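The reviewer's question about learning w and s separately versus learning the folded weight directly can be made concrete by writing out the chain rule through the gate. This is an editorial illustration, not code from either paper; it only assumes the effective weight tilde{w}^{(k)}_{ij} = w^{(k)}_{ij} sigma(s_{kj}) quoted above, and the function name and shapes are hypothetical.

```python
# Gradients through a per-hidden-unit gate: w_tilde[:, j] = w[:, j] * sigmoid(s[j]).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_gradients(dL_dwtilde, w, s):
    """dL_dwtilde, w: (d, J) arrays for one view; s: (J,) switch parameters for that view."""
    g = sigmoid(s)
    dL_dw = dL_dwtilde * g                                # each weight gradient scaled by its unit's gate
    dL_ds = (dL_dwtilde * w).sum(axis=0) * g * (1.0 - g)  # one gate gradient per hidden unit
    return dL_dw, dL_ds
```

The contrast the review asks about is visible here: a single s_{kj} ties together the whole column of weights from view k to hidden unit j, so pushing the gate toward 0 switches that unit off for the view as a group, which learning the entries of the folded weight independently (as in DWH) would not do.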