forum_id
stringlengths 8
20
| forum_title
stringlengths 1
899
| forum_authors
sequencelengths 0
174
| forum_abstract
stringlengths 0
4.69k
| forum_keywords
sequencelengths 0
35
| forum_pdf_url
stringlengths 38
50
| forum_url
stringlengths 40
52
| note_id
stringlengths 8
20
| note_type
stringclasses 6
values | note_created
int64 1,360B
1,737B
| note_replyto
stringlengths 4
20
| note_readers
sequencelengths 1
8
| note_signatures
sequencelengths 1
2
| venue
stringclasses 349
values | year
stringclasses 12
values | note_text
stringlengths 10
56.5k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and
Semantic Word Vectors | [
"Danqi Chen",
"Richard Socher",
"Christopher Manning",
"Andrew Y. Ng"
] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledgebase. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 82.8%. | [
"new facts",
"knowledge bases",
"neural tensor networks",
"semantic word vectors",
"relations",
"entities",
"model",
"database",
"bases",
"applications"
] | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | OgesTW8qZ5TWn | review | 1,363,419,120,000 | msGKsXQXNiCBk | [
"everyone"
] | [
"Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng"
] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their comments and agree with most of them.
- We've updated our paper on arxiv, and added the important experimental comparison to the model in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012).
Experimental results show that our model also outperforms this model in terms of ranking & classification.
- We didn't report the results on the original data because of the issues of overlap between training and testing set.
80.23% of the examples in the testing set appear exactly in the training set.
99.23% of the examples have e1 and e2 'connected' via some relation in the training set. Some relationships such as 'is similar to' are symmetric.
Furthermore, we can reach 92.8% of top 10 accuracy (instead of 76.7% in the original paper) using their model.
- The classification task can help us predict whether a relationship is correct or not, thus we report both the results of classification and ranking.
- To use the pre-trained word vectors, we ignore the senses of the entities in Wordnet in this paper.
- The experiments section is short because we tried to keep the paper's length close to the recommended length. From the ICLR website: 'Papers submitted to this track are ideally 2-3 pages long'. |
msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and
Semantic Word Vectors | [
"Danqi Chen",
"Richard Socher",
"Christopher Manning",
"Andrew Y. Ng"
] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledgebase. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 82.8%. | [
"new facts",
"knowledge bases",
"neural tensor networks",
"semantic word vectors",
"relations",
"entities",
"model",
"database",
"bases",
"applications"
] | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | PnfD3BSBKbnZh | review | 1,362,079,260,000 | msGKsXQXNiCBk | [
"everyone"
] | [
"anonymous reviewer 75b8"
] | ICLR.cc/2013/conference | 2013 | title: review of Learning New Facts From Knowledge Bases With Neural Tensor Networks and
Semantic Word Vectors
review: - A brief summary of the paper's contributions, in the context of prior work.
This paper proposes a new energy function (or scoring function) for ranking pairs of entities and their relationship type. The energy function is based on a so-called Neural Tensor Network, which essentially introduces a bilinear term in the computation of the hidden layer input activations of a single hidden layer neural network. A favorable comparison with the energy-function proposed in Bordes et al. 2011 is presented.
- An assessment of novelty and quality.
This work follows fairly closely the work of Border et al. 2011, with the main difference being the choice of the energy/scoring function. This is an advantage in terms of the interpretability of the results: this paper clearly demonstrates that the proposed energy function is better, since everything else (the training objective, the evaluation procedure) is the same. This is however a disadvantage in terms of novelty as this makes this work somewhat incremental.
Bordes et al. 2011 also proposed an improved version of their model, using kernel density estimation, which is not used here. However, I suppose that the proposed model in this paper could also be similarly improved.
More importantly, Bordes and collaborators have more recently looked at another type of energy function, in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012), which also involves bilinear terms and is thus similar (but not the same) as the proposed energy function here. In fact, the Bordes et al. 2012 energy function seems to outperform the 2011 one (without KDE), hence I would argue that the former would have been a better baseline for comparisons.
- A list of pros and cons (reasons to accept/reject).
Pros: Clear demonstration of the superiority of the proposed energy function over that of Bordes et al. 2011.
Cons: No comparison with the more recent energy function of Bordes et al. 2012, which has some similarities to the proposed Neural Tensor Networks.
Since this was submitted to the workshop track, I would be inclined to have this paper accepted still. This is clearly work in progress (the submitted paper is only 4 pages long), and I think this line of work should be encouraged. However, I would suggest the authors also perform a comparison with the scoring function of Bordes et al. 2012 in future work, using their current protocol (which is nicely setup so as to thoroughly compare energy functions). |
msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and
Semantic Word Vectors | [
"Danqi Chen",
"Richard Socher",
"Christopher Manning",
"Andrew Y. Ng"
] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledgebase. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 82.8%. | [
"new facts",
"knowledge bases",
"neural tensor networks",
"semantic word vectors",
"relations",
"entities",
"model",
"database",
"bases",
"applications"
] | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | yA-tyFEFr2A5u | review | 1,362,246,000,000 | msGKsXQXNiCBk | [
"everyone"
] | [
"anonymous reviewer 7e51"
] | ICLR.cc/2013/conference | 2013 | title: review of Learning New Facts From Knowledge Bases With Neural Tensor Networks and
Semantic Word Vectors
review: This paper proposes a new model for modeling data of multi-relational knowledge bases such as Wordnet or YAGO. Inspired by the work of (Bordes et al., AAAI11), they propose a neural network-based scoring function, which is trained to assign high score to plausible relations. Evaluation is performed on Wordnet.
The main differences w.r.t. (Bordes et al., AAAI11) is the scoring function, which now involves a tensor product to encode for the relation type and the use of a non-linearity. It would be interesting if the authors could comment the motivations of their architecture. For instance, what does the tanh could model here?
The experiments raise some questions:
- why do not also report the results on the original data set of (Bordes et al., AAAI11)? Even, is the data set contains duplicates, this stills makes a reference point.
- the classification task is hard to motivate. Link prediction is a problem of detection: very few positive to find in huge set of negative examples. Transform that into a balanced classification problem is a non-sense to me.
There have been several follow-up works to (Bordes et al., AAAI11) such as (Bordes et al., AISTATS12) or (Jenatton et al., NIPS12), that should be cited and discussed (some of those involve tensor for coding the relation type as well). Besides, they would also make the experimental comparison stronger.
It should be explained how the pre-trained word vectors trained by the model of Collobert & Weston are use in the model. Wordnet entities are senses and not words and, of course, there is no direct mapping from words to senses. Which heuristic has been used?
Pros:
- better experimental results
Cons:
- skinny experimental section
- lack of recent references |
msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and
Semantic Word Vectors | [
"Danqi Chen",
"Richard Socher",
"Christopher Manning",
"Andrew Y. Ng"
] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledgebase. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 82.8%. | [
"new facts",
"knowledge bases",
"neural tensor networks",
"semantic word vectors",
"relations",
"entities",
"model",
"database",
"bases",
"applications"
] | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | 7jyp7wrwSzagb | review | 1,363,419,120,000 | msGKsXQXNiCBk | [
"everyone"
] | [
"Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng"
] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their comments and agree with most of them.
- We've updated our paper on arxiv, and added the important experimental comparison to the model in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012).
Experimental results show that our model also outperforms this model in terms of ranking & classification.
- We didn't report the results on the original data because of the issues of overlap between training and testing set.
80.23% of the examples in the testing set appear exactly in the training set.
99.23% of the examples have e1 and e2 'connected' via some relation in the training set. Some relationships such as 'is similar to' are symmetric.
Furthermore, we can reach 92.8% of top 10 accuracy (instead of 76.7% in the original paper) using their model.
- The classification task can help us predict whether a relationship is correct or not, thus we report both the results of classification and ranking.
- To use the pre-trained word vectors, we ignore the senses of the entities in Wordnet in this paper.
- The experiments section is short because we tried to keep the paper's length close to the recommended length. From the ICLR website: 'Papers submitted to this track are ideally 2-3 pages long'. |
IpmfpAGoH2KbX | Deep learning and the renormalization group | [
"Cédric Bény"
] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling. | [
"algorithm",
"deep learning",
"way",
"effective behavior",
"system",
"scale",
"key"
] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | rGZJRE7IJwrK3 | review | 1,392,852,360,000 | IpmfpAGoH2KbX | [
"everyone"
] | [
"Charles Martin"
] | ICLR.cc/2013/conference | 2013 | review: It is noted that the connection between RG and multi-scale modeling has been pointed out by Candes in
E. J. Candès, P. Charlton and H. Helgason. Detecting highly oscillatory signals by chirplet path pursuit. Appl. Comput. Harmon. Anal. 24 14-40.
where it was noted that the multi-scale basis suggested in this convex optimization approach is equivalent to the Wilson basis from his original work on RG theory in the 1970s |
IpmfpAGoH2KbX | Deep learning and the renormalization group | [
"Cédric Bény"
] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling. | [
"algorithm",
"deep learning",
"way",
"effective behavior",
"system",
"scale",
"key"
] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | 4Uh8Uuvz86SFd | comment | 1,363,212,060,000 | 7to37S6Q3_7Qe | [
"everyone"
] | [
"Cédric Bény"
] | ICLR.cc/2013/conference | 2013 | reply: I have submitted a replacement to the arXiv on March 13, which should be available the same day at 8pm EST/EDT as version 4.
In order to address the first issue, I rewrote section 2 to make it less confusing, specifically by not trying to be overly general. I also rewrote the caption of figure 1 to make it a nearly self-contained explanation of what the model is for a specific one-dimensional example. The content of section 2 essentially explains what features must be kept for any generalization, and section 3 clarifies why these features are important.
Concerning the second issue, I agree that this work is preliminary, and implementation is the next step. |
IpmfpAGoH2KbX | Deep learning and the renormalization group | [
"Cédric Bény"
] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling. | [
"algorithm",
"deep learning",
"way",
"effective behavior",
"system",
"scale",
"key"
] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | 7to37S6Q3_7Qe | review | 1,362,321,600,000 | IpmfpAGoH2KbX | [
"everyone"
] | [
"anonymous reviewer 441c"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep learning and the renormalization group
review: The model tries to relate renormalization group and deep learning, specifically hierarchical Bayesian network. The primary problems are that 1) the paper is only descriptive - it does not explain models clearly and precisely, and 2) it has no numerical experiments showing that it works.
What it needs is something like:
1) Define the DMRG (or whatever verion of RG you need) and Define the machine learning model. Do these with explicit formulas so reader can know what exactly they are. Things like 'Instead, we only allow for maps πj which are local in two important ways: firstly, each input vertex can only causally influence the values associated with the m output vertices that it represents plus all kth degree neighbors of these, where k would typically be small' are very hard to follow.
2) Show the mapping between the two models.
3) Show what it does on real data and that it does something interesting and/or useful. (Real data e.g. sound signals, images, text,...) |
IpmfpAGoH2KbX | Deep learning and the renormalization group | [
"Cédric Bény"
] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling. | [
"algorithm",
"deep learning",
"way",
"effective behavior",
"system",
"scale",
"key"
] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | tb0cgaJXQfgX6 | review | 1,363,477,320,000 | IpmfpAGoH2KbX | [
"everyone"
] | [
"Aaron Courville"
] | ICLR.cc/2013/conference | 2013 | review: Reviewer 441c,
Have you taken a look at the new version of the paper? Does it go some way to addressing your concerns? |
IpmfpAGoH2KbX | Deep learning and the renormalization group | [
"Cédric Bény"
] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling. | [
"algorithm",
"deep learning",
"way",
"effective behavior",
"system",
"scale",
"key"
] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | 7Kq-KFuY-y7S_ | review | 1,365,121,080,000 | IpmfpAGoH2KbX | [
"everyone"
] | [
"Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | review: It seems to me like there could be an interesting connection between approximate inference in graphical models and the renormalization methods.
There is in fact a long history of interactions between condensed matter physics and graphical models. For example, it is well known that the loopy belief propagation algorithm for inference minimizes the Bethe free energy (an approximation of the free energy in which only pairwise interactions are taken into account and high-order interactions are ignored). More generally, variational methods inspired by statistical physics have been a very popular topic in graphical model inference.
The renormalization methods could be relevant to deep architectures in the sense that the grouping of random variable resulting from a change of scale could be be made analogous with the pooling and subsampling operations often used in deep models.
It's an interesting idea, but it will probably take more work (and more tutorial expositions of RG) to catch the attention of this community. |
IpmfpAGoH2KbX | Deep learning and the renormalization group | [
"Cédric Bény"
] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG---the multiscale entanglement renormalization ansatz (MERA)---and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption---common in physics---that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling. | [
"algorithm",
"deep learning",
"way",
"effective behavior",
"system",
"scale",
"key"
] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | Qj1vSox-vpQ-U | review | 1,362,219,360,000 | IpmfpAGoH2KbX | [
"everyone"
] | [
"anonymous reviewer acf4"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep learning and the renormalization group
review: This paper discusses deep learning from the perspective of renormalization groups in theoretical physics. Both concepts are naturally related; however, this relation has not been formalized adequately thus far and advancing this is a novelty of the paper. The paper contains a non-technical and insightful exposition of concepts and discusses a learning algorithm for stochastic networks based on the `multiscale entanglement renormalization ansatz' (MERA). This contribution will potentially evoke the interest of many readers. |
SqNvxV9FQoSk2 | Switched linear encoding with rectified linear autoencoders | [
"Leif Johnson",
"Craig Corcoran"
] | Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified linear activations on its hidden units. Our analysis builds on recent results to further unify the world of sparse linear coding models. We provide an intuitive interpretation of the behavior of these coding models and demonstrate this intuition using small, artificial datasets with known distributions. | [
"linear",
"models",
"rectified linear autoencoders",
"machine learning",
"formal connections",
"autoencoders",
"neural network models",
"inputs",
"sparse coding"
] | https://openreview.net/pdf?id=SqNvxV9FQoSk2 | https://openreview.net/forum?id=SqNvxV9FQoSk2 | ff2dqJ6VEpR8u | review | 1,362,252,900,000 | SqNvxV9FQoSk2 | [
"everyone"
] | [
"anonymous reviewer 5a78"
] | ICLR.cc/2013/conference | 2013 | title: review of Switched linear encoding with rectified linear autoencoders
review: In the deep learning community there has been a recent trend in
moving away from the traditional sigmoid/tanh activation function to
inject non-linearity into the model. One activation function that has
been shown to work well in a number of cases is called Rectified
Linear Unit (ReLU).
Building on the prior research, this paper aims to provide an
analysis of what is going on while training networks using these
activation functions, and why do they work. In particular the authors
provide their analysis from the context of training a linear auto-encoder
with rectified linear units on a whitened data. They use a toy dataset in
3 dimensions (gaussian and mixture of gaussian) to conduct the analysis.
They loosely test the hypothesis obtained from the toy datasets on the
MNIST data.
Though the paper starts with a lot of promise, unfortunately it fails to
deliver on what was promised. There is nothing in the paper (no new
idea or insight) that is either not already known, or fairly straightforward
to see in the case of linear auto-encoders trained using a rectified
linear thresholding unit. Furthermore there are a number of flaws in
the paper. For instance, the analysis of section 3.1 seems to be a bit
mis-leading. By definition if one fixes the weight vector w to [1,0] there
is no way that the sigmoid can distinguish between x's which are
greater than S for some S. However with the weight vector taking
arbitrary continuous values, that may not be the case. Besides, the
purpose of the encoder is to learn a representation, which can best
represent the input, and coupled with the decoder can reconstruct it.
The encoder learning an identity function (as is argued in the paper) is not
of much use. Finally, the whole analysis of section 3 was based on a
linear auto-encoder, whose encoder-decoder weights were tied. However
in the case of MNIST the authors show the filters learnt from an untied
weight auto-encoder. There seems to be some disconnect there.
In short the paper does not offer any novel insight or idea with respect
to learning representation using auto-encoders with rectified linear
thresholding function. Various gaps in the analysis also makes it a not
very high quality work. |
SqNvxV9FQoSk2 | Switched linear encoding with rectified linear autoencoders | [
"Leif Johnson",
"Craig Corcoran"
] | Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified linear activations on its hidden units. Our analysis builds on recent results to further unify the world of sparse linear coding models. We provide an intuitive interpretation of the behavior of these coding models and demonstrate this intuition using small, artificial datasets with known distributions. | [
"linear",
"models",
"rectified linear autoencoders",
"machine learning",
"formal connections",
"autoencoders",
"neural network models",
"inputs",
"sparse coding"
] | https://openreview.net/pdf?id=SqNvxV9FQoSk2 | https://openreview.net/forum?id=SqNvxV9FQoSk2 | kH1XHWcuGjDuU | review | 1,361,946,600,000 | SqNvxV9FQoSk2 | [
"everyone"
] | [
"anonymous reviewer 9c3f"
] | ICLR.cc/2013/conference | 2013 | title: review of Switched linear encoding with rectified linear autoencoders
review: This paper analyzes properties of rectified linear autoencoder
networks.
In particular, the paper shows that rectified linear networks are
similar to linear networks (ICA). The major difference is the
nolinearity ('switching') that allows the decoder to select a subset
of features. Such selection can be viewed as a mixture of ICA models.
The paper visualizes the hyperplanes learned for a 3D dataset and
shows that the results are sensible (i.e., the learned hyperplanes
capture the components that allow the reconstruction of the data).
Some comments:
- On the positive side, I think that the paper makes a interesting attempt to understand properties of nonlinear networks, which is typically hard because of the nonlinearities. The choice of the activation function (rectified linear) makes such analysis possible.
- I understand that the paper is mainly an analysis paper. But I feel
that it seems to miss a strong key thesis. It would be more interesting that the analysis reveals surprising/unexpected results.
- The analyses do not seem particularly deep nor surprising. And I do
not find that they can advance our field in some way. I wonder if it's possible to make the analysis more constructive so that we can improve our algorithms. Or at least the analyses can reveal certain surprising properties of unsupervised algorithms.
- It's unclear the motivation behind the use of rectified linear
activation function for analysis.
- The paper touches a little bit on whitening. I find the section on
this topic is unsatisfying. It would be good to analyse the role of whitening in greater details here too (as claimed by abstract and introduction).
- The experiments show that it's possible to learn penstrokes and
Gabor filters from natural images. But I think this is no longer
novel. And that there are very few practical implications of
this work. |
SqNvxV9FQoSk2 | Switched linear encoding with rectified linear autoencoders | [
"Leif Johnson",
"Craig Corcoran"
] | Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified linear activations on its hidden units. Our analysis builds on recent results to further unify the world of sparse linear coding models. We provide an intuitive interpretation of the behavior of these coding models and demonstrate this intuition using small, artificial datasets with known distributions. | [
"linear",
"models",
"rectified linear autoencoders",
"machine learning",
"formal connections",
"autoencoders",
"neural network models",
"inputs",
"sparse coding"
] | https://openreview.net/pdf?id=SqNvxV9FQoSk2 | https://openreview.net/forum?id=SqNvxV9FQoSk2 | oozAQe0eAnQ1w | review | 1,362,360,840,000 | SqNvxV9FQoSk2 | [
"everyone"
] | [
"anonymous reviewer ab3b"
] | ICLR.cc/2013/conference | 2013 | title: review of Switched linear encoding with rectified linear autoencoders
review: The paper draws links between autoencoders with tied weights and rectified linear units (similar to Glorot et al AISTATS 2011), the triangle k-means and soft-thresholding of Coates et al. (AISTATS 2011 and ICML 2011), and the linear-autoencoder-like ICA learning criterion of Le et al (NIPS 2011).
The first 3 have in common that, for each example, they yield a subset of non-zero (active) hidden units, that result from a simple thresholding. And it is argued that the training objective thus restricted to that subset corresponds to that of Le et al's ICA. Many 2D and 3D graphics with Gaussian data try to convey a geometric intuition of what is going on.
I find rather obvious that these methods switch on a different linear basis for each example. The specific conection highlighted with Le et al's ICA work is more interesting, but it only applies if L1 feature sparsity regularization is employed in addition to the rectified linear activation function.
At the present stage, my impression is that this paper mainly reflect on the authors' maturing perception of links between the various methods, together with their building of an intuitive geometric understanding of how they work. But it is not yet ripe and its take home message not clear.
While its reflections are not without basis or potential interest they are not currently sufficiently formally exposed and read like a set of loosely bundled observations. I think the paper could greatly benefit from a more streamlined central thesis and message with supporting arguments.
The main empirical finding from the small experiments in this paper seems to be that the training criterion tends to yield pairs of opposed (negated) feature vectors. What we should conclude from this is however unclear.
The graphics are too many. Several seem redundant and are not particularly enlightening for our understanding. Also the use of many Gaussian data examples seems a poor choice to highlight or analyse the switching behavior of these 'switched linear coding' techniques (what does switching buy us if a PCA can capture about all there is about the structure?). |
DD2gbWiOgJDmY | Why Size Matters: Feature Coding as Nystrom Sampling | [
"Oriol Vinyals",
"Yangqing Jia",
"Trevor Darrell"
] | Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a novel view of this pipeline based on kernel methods and Nystrom sampling. In particular, we focus on the coding of a data point with a local representation based on a dictionary with fewer elements than the number of data points, and view it as an approximation to the actual function that would compute pair-wise similarity to all data points (often too many to compute in practice), followed by a Nystrom sampling step to select a subset of all data points. Furthermore, since bounds are known on the approximation power of Nystrom sampling as a function of how many samples (i.e. dictionary size) we consider, we can derive bounds on the approximation of the exact (but expensive to compute) kernel matrix, and use it as a proxy to predict accuracy as a function of the dictionary size, which has been observed to increase but also to saturate as we increase its size. This model may help explaining the positive effect of the codebook size and justifying the need to stack more layers (often referred to as deep learning), as flat models empirically saturate as we add more complexity. | [
"nystrom",
"data points",
"size matters",
"feature",
"approximation",
"bounds",
"function",
"dictionary size",
"computer vision",
"machine learning community"
] | https://openreview.net/pdf?id=DD2gbWiOgJDmY | https://openreview.net/forum?id=DD2gbWiOgJDmY | EW9REhyYQcESw | review | 1,362,202,140,000 | DD2gbWiOgJDmY | [
"everyone"
] | [
"anonymous reviewer 1024"
] | ICLR.cc/2013/conference | 2013 | title: review of Why Size Matters: Feature Coding as Nystrom Sampling
review: The authors provide an analysis of the accuracy bounds of feature coding + linear classifier pipelines. They predict an approximate accuracy bound given the dictionary size and correctly estimate the phenomenon observed in the literature where accuracy increases with dictionary size but also saturates.
Pros:
- Demonstrates limitations of shallow models and analytically justifies the use of deeper models. |
DD2gbWiOgJDmY | Why Size Matters: Feature Coding as Nystrom Sampling | [
"Oriol Vinyals",
"Yangqing Jia",
"Trevor Darrell"
] | Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a novel view of this pipeline based on kernel methods and Nystrom sampling. In particular, we focus on the coding of a data point with a local representation based on a dictionary with fewer elements than the number of data points, and view it as an approximation to the actual function that would compute pair-wise similarity to all data points (often too many to compute in practice), followed by a Nystrom sampling step to select a subset of all data points. Furthermore, since bounds are known on the approximation power of Nystrom sampling as a function of how many samples (i.e. dictionary size) we consider, we can derive bounds on the approximation of the exact (but expensive to compute) kernel matrix, and use it as a proxy to predict accuracy as a function of the dictionary size, which has been observed to increase but also to saturate as we increase its size. This model may help explaining the positive effect of the codebook size and justifying the need to stack more layers (often referred to as deep learning), as flat models empirically saturate as we add more complexity. | [
"nystrom",
"data points",
"size matters",
"feature",
"approximation",
"bounds",
"function",
"dictionary size",
"computer vision",
"machine learning community"
] | https://openreview.net/pdf?id=DD2gbWiOgJDmY | https://openreview.net/forum?id=DD2gbWiOgJDmY | oxSZoe2BGRoB6 | review | 1,362,196,320,000 | DD2gbWiOgJDmY | [
"everyone"
] | [
"anonymous reviewer 998c"
] | ICLR.cc/2013/conference | 2013 | title: review of Why Size Matters: Feature Coding as Nystrom Sampling
review: This paper presents a theoretical analysis and empirical validation of a novel view of feature extraction systems based on the idea of Nystrom sampling for kernel methods. The main idea is to analyze the kernel matrix for a feature space defined by an off-the-shelf feature extraction system. In such a system, a bound is identified for the error in representing the 'full' dictionary composed of all data points by a Nystrom approximated version (i.e., represented by subsampling the data points randomly). The bound is then extended to show that the approximate kernel matrix obtained using the Nystrom-sampled dictionary is close to the true kernel matrix, and it is argued that the quality of the approximation is a reasonable proxy for the classification error we can expect after training. It is shown that this approximation model qualitatively predicts the monotonic rise in accuracy of feature extraction with larger dictionaries and saturation of performance in experiments.
This is a short paper, but the main idea and analysis are interesting. It is nice to have some theoretical machinery to talk about the empirical finding of rising, saturating performance. In some places I think more detail could have been useful.
One undiscussed point is the fact that many dictionary-learning methods do more than populate the dictionary with exemplars so it's possible that a 'learning' method might do substantially better (perhaps reaching top performance much sooner). This doesn't appear to be terribly important in low-dimensional spaces where sampling strategies work about as well as learning, but could be critical for high-dimensional spaces (where sampling might asymptote much more slowly than learning). It seems worth explaining the limitations of this analysis and how it relates to learning.
A few other questions / comments:
The calibration of constants for the bound in the experiments was not clear to me. How is the mapping from the bound (Eq. 2) to classification accuracy actually done?
The empirical validation of the lower bound relies on a calibration procedure that, as I understand it, effectively ends up rescaling a fixed-shape curve to fit observed trend in accuracy on the real problem. As a result, it seems like we could come up with a 'nonsense' bound that happened to have such a shape and then make a similar empirical claim. Is there a way to extend the analysis to rule this out? Or perhaps I misunderstand the origin of the shape of this curve.
Pros:
(1) A novel view of feature extraction that appears to yield a reasonable explanation for the widely observed performance curves of these methods is presented. I don't know how much profit this view might yield, but perhaps that will be made clear by the 'overshooting' method foreshadowed in the conclusion.
(2) A pleasingly short read adequate to cover the main idea. (Though a few more details might be nice.)
Cons:
(1) How this bound relates to the more common case of 'trained' dictionaries is unclear.
(2) The empirical validation shows the basic relationship qualitatively, but it is possible that this does not adequately validate the theoretical ideas and their connection to the observed phenomenon. |
DD2gbWiOgJDmY | Why Size Matters: Feature Coding as Nystrom Sampling | [
"Oriol Vinyals",
"Yangqing Jia",
"Trevor Darrell"
] | Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a novel view of this pipeline based on kernel methods and Nystrom sampling. In particular, we focus on the coding of a data point with a local representation based on a dictionary with fewer elements than the number of data points, and view it as an approximation to the actual function that would compute pair-wise similarity to all data points (often too many to compute in practice), followed by a Nystrom sampling step to select a subset of all data points. Furthermore, since bounds are known on the approximation power of Nystrom sampling as a function of how many samples (i.e. dictionary size) we consider, we can derive bounds on the approximation of the exact (but expensive to compute) kernel matrix, and use it as a proxy to predict accuracy as a function of the dictionary size, which has been observed to increase but also to saturate as we increase its size. This model may help explaining the positive effect of the codebook size and justifying the need to stack more layers (often referred to as deep learning), as flat models empirically saturate as we add more complexity. | [
"nystrom",
"data points",
"size matters",
"feature",
"approximation",
"bounds",
"function",
"dictionary size",
"computer vision",
"machine learning community"
] | https://openreview.net/pdf?id=DD2gbWiOgJDmY | https://openreview.net/forum?id=DD2gbWiOgJDmY | 8sJwMe5ZwE8uz | review | 1,363,264,440,000 | DD2gbWiOgJDmY | [
"everyone"
] | [
"Oriol Vinyals, Yangqing Jia, Trevor Darrell"
] | ICLR.cc/2013/conference | 2013 | review: We agree with the reviewer regarding the existence of better dictionary learning methods, and note that many of these are also related to corresponding advanced Nystrom sampling methods, such as [Zhang et al. Improved Nystrom low-rank approximation and error analysis. ICML 08]. These methods could improve performance in absolute terms, but that is an orthogonal issue to our main results. Nonetheless, we think this is a valuable observation, and will include a discussion of these points in the final version of this paper.
The relationship between a kernel error bound and classification accuracy is discussed in more detail in [Cortes et al. On the Impact of Kernel Approximation on Learning Accuracy. AISTATS 2010]. The main result is that the bounds are proportional, verifying our empirical claims. We will add this reference to the paper.
Regarding the comment on fitting the shape of the curve, we are only using the first two points to fit the 'constants' given in the bound, so the fact that it extrapolates well in many tasks gives us confidence that the bound is accurate. |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | [
"Hugo Van hamme"
] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware. | [
"diagonalized newton algorithm",
"nmf",
"nonnegative matrix factorization",
"data",
"convergence",
"matrix factorization",
"popular machine",
"many problems",
"text mining"
] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | RzSh7m1KhlzKg | review | 1,363,574,460,000 | i87JIQTAnB8AQ | [
"everyone"
] | [
"Hugo Van hamme"
] | ICLR.cc/2013/conference | 2013 | review: I would like to thank the reviewers for their investment of time and effort to formulate their valued comments. The paper was updated according to your comments. Below I address your concerns:
A common remark is the lack of comparison with state-of-the-art NMF solvers for Kullback-Leibler divergence (KLD). I compared the performance of the diagonalized Newton algorithm (DNA) with the wide-spread multiplicative updates (MU) exactly because it is the most common baseline and almost every algorithm has been compared against it. As you suggested, I did run comparison tests and I will present the results here. I need to find a method to post some figures to make the point clear. First, I compared against the Cyclic Coordinate Descent (CCD) by Hsieh & Dhillon using the software they provide on their website. I ran the synthetic 1000x500 example (rank 10). The KLD as a function of iteration number for DNA and CCD are very close (I did not find a way to post a plot on this forum). However, in terms of CPU (ran on the machine I mention in the paper) DNA is a lot faster with about 200ms per iteration for CCD and about 50ms for DNA. Note that CCD is completely implemented in C++ (embedded in a mex-file) while DNA is implemented in matlab (with one routine in mex - see the download page mentioned in the paper). As for the comparison with SBCD (scalar block coordinate descent), I also ran their code on the same example, but unfortunately, one of the matrix factors is projected to an all-zero matrix in the first iteration. I have not found the cause yet.
What definitely needs investigation is that I observe CCD to be 4 times slower than DNA. Using my implementation for MU, 1200 MU iterations are actually as fast as the 100 CCD iteration. (My matlab MU implementation is 10 times faster than the one provided by Hsieh&Dhillon). For these reasons, I am not too keen on quickly including a comparison in terms of CPU time (which is really the bottom line), as implementation issues seem not so trivial. Even more so for a comparison on a GPU, where the picture could be different from the CPU for the cyclic updates in CCD. A thorough comparison on these two architectures seems like a substantial amount of future work. But I hope the data above data convince you the present paper and public code are significant work.
Reply to Anonymous 57f3
' it's not clear that matrix factorization is a problem for which optimization speed is a primary concern (all of the experiments in the paper terminate after only a few minutes)'
>> There are practical problems where NMF takes hours, e.g. the problems of [6], which is essentially learning a speech recognizer model from data. We are now applying NMF-based speech recognition in learning paradigms that learn from user interaction examples. In such cases, you want to wait seconds, not minutes. Also, there is an increased interest in 'large-sccale NMF problems'.
'Using a KL-divergence objective seems strange to me since there aren't any distributions involved, just matrices, whose entries, while positive, need not sum to 1 along any row or column. Are the entries of the matrices supposed to represent probabilities? '
>> Notice that the second and third term in the expression for KLD (Eq. 1) are normalization terms such that we don't require V or Z to sum to unity. This very common in the NMF literature, and was motivated in a.o. [1]. KLD is appropriate if the data follow a (mixture of) Poisson distribution. While this is realistic for counts data (like in the Newsgroup corpus), the KLD is also applied on Fourier spectra, e.g. for speaker separation or speech enhancement, with success. Imho, the relevance of KLD does not need to be motivated in a paper on algorithms, see also [18] and [20] ( numbering in the new paper).
'I understand that this is a formulation used in previous work ([1]), but it should be briefly explained. '
>> Added a sentence about the Poisson hypothesis after Eq. 1.
'You should explain the connection between your work and [17] more carefully. Exactly how is it similar/different? '
>> Reformulated. [17] (now [18]) uses a totally different motivation, but also involves the second order derivatives, like a Newton method.
'Has a diagonal Newton-type approach ever been used for the squared error objective? '
>> A reference is given now. Note however that KLD behaves substantially different.
'the smallest cost' -> 'leading to the greatest reduction in d_{KL}(V,Z)'
'the variables required to compute' -> 'the quantities required to compute'
>> corrected
You should avoid using two meanings of the word 'regularized' as this can lead to confusion. Maybe 'damped' would work better to refer to the modifications made to the Newton updates that prevent divergence?
>> Yes. A lot better. Corrected.
'Have you compared to using damped/'regularized' Newton updates instead of your method of selecting the best between the Newton and MU updates? In my experience, damping, along the lines of the LM algorithm or something similar, can help a great deal. '
>> yes. I initially tried to control the damping by adding lambda*I to the Hessian, where lambda is decreased on success and increased if the KLD increases. I found it difficult to find a setting that worked well on a variety of problems.
I would recommend using ' op' to denote matrix transposition instead of what you are doing. Section 2 needs to be reorganized. It's hard for me to follow what you are trying to say here. First, you introduce some regularization terms. Then, you derive a particular fixed-point update scheme. When you say 'Minimizing [...] is achieved by alternative updates...' surely you mean that this is just one particular way it might be done.
>> That's indeed what I meant to say. 'is' => 'can be'
You say you are applying the KKT conditions, but your derivation is strange and you seem to skip a bunch of steps and neglect to use explicit KKT multipliers (although the result seems correct based on my independent derivation). But when you say: 'If h_r = 0, the partial derivative is positive. Hence the product of h_r and the partial derivative is always zero', I don't see how this is a correct logical implication. Rather, the product is zero for any solution satisfying complementary slackness.
>> I meant this holds for any solution of (5). This is corrected.
And I don't understand why it is particularly important that the sum over equation (6) is zero (which is how the normalization in eqn 10 is justified). Surely this is only a (weak) necessary condition, but not a sufficient one, for a valid optimal solution. Or is there some reason why this is sufficient (if so, please state it in the paper!).
>> A Newton update may yield a guess that does not satisfy this (weak) necessary condition. We can satisfy this condition easily with the renormalization (10), which is reflected in steps 16 and 29.
I don't understand how the sentence on line 122 'Therefor...' is not a valid logical implication. Did you actually mean to use the word 'therefor' here? The lower bound is, however, correct. 'floor resp. ceiling'??
>> 'Therefore' => 'To respect the nonnegativity and to avoid the singularity”
Reply to Anonymous 4322
See comparison described above.
I added more about the differences with the prior work you mention.
Reply to Anonymous 482c
See also comparison data detailed above.
You are right there is a lot of generic work on Hessian preconditioning. I refer to papers that work on damping and line search in the context of NMF ([10], [11], [12], [14] ...). Diagonalization is only related in the sense that it ensures the Hessian to be positive definite (not in general, but here is does). |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | [
"Hugo Van hamme"
] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware. | [
"diagonalized newton algorithm",
"nmf",
"nonnegative matrix factorization",
"data",
"convergence",
"matrix factorization",
"popular machine",
"many problems",
"text mining"
] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | FFkZF49pZx-pS | review | 1,362,210,360,000 | i87JIQTAnB8AQ | [
"everyone"
] | [
"anonymous reviewer 4322"
] | ICLR.cc/2013/conference | 2013 | title: review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization
review: Summary:
The paper presents a new algorithm for solving L1 regularized NMF problems in which the fitting term is the Kullback-Leiber divergence. The strategy combines the classic multiplicative updates with a diagonal approximation of Newton's method for solving the KKT conditions of the NMF optimization problem. This approximation results in a multiplicative update that is computationally light. Since the objective function might increase under the Newton updates, the author proposes to simultaneously compute both multiplicative and Newton updates and choose the one that produces the largest descent. The algorithm is tested on several datasets, generally producing improvements in both number of iterations and computational time with respect to the standard multiplicative updates.
I believe that the paper is well written. It proposes an efficient optimization algorithm for solving a problem that is not novel but very important in many applications. The author should highlight the strengths of the proposed approach and the differences with recent works presented in the literature.
Pros.:
- the paper addresses an important problem in matrix factorization,
extensively used in audio processing applications
- the experimental results show that the method is more efficient than the multiplicative algorithm (which is the most widely used optimization tool), without significantly increasing the algorithmic complexity
Cons:
- experimental comparisons against related approaches is missing
- this approach seems limited to only work for the Kullback-Leiber
divergence as fitting cost.
General comments:
I believe that the paper lacks of experimental comparisons with other accelerated optimization schemes for solving the same problem. In particular, I believe that the author should include comparisons with [17] and the work,
C.-J. Hsieh and I. S. Dhillon. Fast coordinate descent methods with variable selection for non-negative matrix factorization. In Proceedings of the 17th ACM SIGKDD, pages 1064–1072, 2011.
which should also be cited.
As the author points out, the approach in [17] is very similar to the one proposed in this paper (they have code available online). The work by Hsieh and Dhillon is also very related to this paper. They propose a coordinate descent method using Newton's method to solve the individual one-variable sub-problems. More details on the differences with these two works should be provided in Section 1.
The experimental setting itself seems convincing. Figures 2 and 3 are never cited in the paper. |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | [
"Hugo Van hamme"
] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware. | [
"diagonalized newton algorithm",
"nmf",
"nonnegative matrix factorization",
"data",
"convergence",
"matrix factorization",
"popular machine",
"many problems",
"text mining"
] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | MqwZf2jPZCJ-n | review | 1,363,744,920,000 | i87JIQTAnB8AQ | [
"everyone"
] | [
"Hugo Van hamme"
] | ICLR.cc/2013/conference | 2013 | review: First: sorry for the multiple postings. Browser acting weird. Can't remove them ...
Update: I was able to get the sbcd code to work. Two mods required (refer to Algorithm 1 in the Li, Lebanon & Park paper - ref [18] in v2 paper on arxiv):
1) you have to be careful with initialization. If the estimates for W or H are too large, E = A - WH could potentially contain too many zeros in line 3 and the update maps H to all zeros. Solution: I first perform a multiplicative update on W and H so you have reasonably scaled estimates.
2) line 16 is wrongly implemented in the publicly available ffhals5.m
I reran the comparison (different machine though - the one I used before was fully loaded):
1) CCD (ref [17]) - the c++ code compiled to a matlab mex file as downloaded from the author's website and following their instructions.
2) DNA - fully implemented in matlab as available from http://www.esat.kuleuven.be/psi/spraak/downloads/
3) SBCD (ref [18]) - code fully in matlab with mods above
4) MU (multiplicative updates) - implementation fully in matlab as available from http://www.esat.kuleuven.be/psi/spraak/downloads/
The KLD as a function of the iteration for the rank-10 random 1000x500 matrix is shown in https://dl.dropbox.com/u/915791/iteration.pdf.
We observe that SBCD takes a good start but then slows down. DNA is best after the 5th iteration.
The KLD as a function of CPU time is shown in https://dl.dropbox.com/u/915791/time.pdf
DNA is the clear winner, followed by MU which beats both SBCD and CCD. This may be surprising, but as I mentioned earlier, there are some implementation issues. CCD is a single-thread implementation, while matlab is multi-threaded and works in parrallel. However, the cyclic updates in CCD are not very suitable for parallelization. The SBCD needs reimplementation, honestly.
In summary, DNA does compare favourably to the state-of-the-art, but I don't really feel comfortable about including such a comparison in a scientific paper if there is such a dominant effect of programming style/skills on the result. |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | [
"Hugo Van hamme"
] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware. | [
"diagonalized newton algorithm",
"nmf",
"nonnegative matrix factorization",
"data",
"convergence",
"matrix factorization",
"popular machine",
"many problems",
"text mining"
] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | oo1KoBhzu3CGs | review | 1,362,192,540,000 | i87JIQTAnB8AQ | [
"everyone"
] | [
"anonymous reviewer 57f3"
] | ICLR.cc/2013/conference | 2013 | title: review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization
review: This paper develops a new iterative optimization algorithm for performing non-negative matrix factorization, assuming a standard 'KL-divergence' objective function. The method proposed combines the use of a traditional updating scheme ('multiplicative updates' from [1]) in the initial phase of optimization, with a diagonal Newton approach which is automatically switched to when it will help. This switching is accomplished by always computing both updates and taking whichever is best, which will typically be MU at the start and the more rapidly converging (but less stable) Newton method towards the end. Additionally, the diagonal Newton updates are made more stable using a few tricks, some of which are standard and some of which may not be. It is found that this can provide speed-ups which may be mild or significant, depending on the application, versus a standard approach which only uses multiplicative updates. As pointed out by the authors, Newton-type methods have been explored for non-negative matrix factorization before, but not for this particularly objective with a diagonal approximation (except perhaps [17]?).
The writing is rough in a few places but okay overall. The experimental results seem satisfactory compared to the classical algorithm from [1], although comparisons to other potentially more recent approaches is conspicuously absent. I'm not an experiment on matrix factorization or these particular datasets so it's hard for me to independently judge if these results are competitive with state of the art methods.
The paper doesn't seem particularly novel to me, but matrix factorization isn't a topic I find particularly interesting, so this probably biases me against the paper somewhat.
Pros:
- reasonably well presented
- empirical results seem okay
Cons:
- comparisons to more recent approaches is lacking
- it's not clear that matrix factorization is a problem for which optimization speed is a primary concern (all of the experiments in the paper terminate after only a few minutes)
- writing is rough in a few places
Detailed comments:
Using a KL-divergence objective seems strange to me since there aren't any distributions involved, just matrices, whose entries, while positive, need not sum to 1 along any row or column. Are the entries of the matrices supposed to represent probabilities? I understand that this is a formulation used in previous work ([1]), but it should be briefly explained.
You should explain the connection between your work and [17] more carefully. Exactly how is it similar/different?
Has a diagonal Newton-type approach ever been used for the squared error objective?
'the smallest cost' -> 'leading to the greatest reduction in d_{KL}(V,Z)'
'the variables required to compute' -> 'the quantities required to compute'
You should avoid using two meanings of the word 'regularized' as this can lead to confusion. Maybe 'damped' would work better to refer to the modifications made to the Newton updates that prevent divergence?
Have you compared to using damped/'regularized' Newton updates instead of your method of selecting the best between the Newton and MU updates? In my experience, damping, along the lines of the LM algorithm or something similar, can help a great deal.
I would recommend using ' op' to denote matrix transposition instead of what you are doing.
Section 2 needs to be reorganized. It's hard for me to follow what you are trying to say here. First, you introduce some regularization terms. Then, you derive a particular fixed-point update scheme. When you say 'Minimizing [...] is achieved by alternative updates...' surely you mean that this is just one particular way it might be done. Also, are these derivation prior work (e.g. from [1])? If so, it should be stated.
It's hard to follow the derivations in this section. You say you are applying the KKT conditions, but your derivation is strange and you seem to skip a bunch of steps and neglect to use explicit KKT multipliers (although the result seems correct based on my independent derivation). But when you say: 'If h_r = 0, the partial derivative is positive. Hence the product of h_r and the partial derivative is always zero', I don't see how this is a correct logical implication. Rather, the product is zero for any solution satisfying complementary slackness. And I don't understand why it is particularly important that the sum over equation (6) is zero (which is how the normalization in eqn 10 is justified). Surely this is only a (weak) necessary condition, but not a sufficient one, for a valid optimal solution. Or is there some reason why this is sufficient (if so, please state it in the paper!).
I don't understand how the sentence on line 122 'Therefor...' is not a valid logical implication. Did you actually mean to use the word 'therefor' here? The lower bound is, however, correct.
'floor resp. ceiling'?? |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | [
"Hugo Van hamme"
] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware. | [
"diagonalized newton algorithm",
"nmf",
"nonnegative matrix factorization",
"data",
"convergence",
"matrix factorization",
"popular machine",
"many problems",
"text mining"
] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | aplzZcXNokptc | review | 1,363,615,980,000 | i87JIQTAnB8AQ | [
"everyone"
] | [
"Hugo Van hamme"
] | ICLR.cc/2013/conference | 2013 | review: About the comparison with Cyclic Coordinate Descent (as described in C.-J. Hsieh and I. S. Dhillon, “Fast Coordinate Descent Methods with Variable Selection for Non-negative Matrix Factorization,” in proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), San Diego, CA, USA, August 2011) using their software:
the plots of the KLD as a function of iteration number and cpu time are located at https://dl.dropbox.com/u/915791/iteration.pdf and https://dl.dropbox.com/u/915791/time.pdf
The data is the synthetic 1000x500 random matrix of rank 10. They show DNA has comparable convergence behaviour and the implementation is faster, despite it's matlab (DNA) vs. c++ (CCD). |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | [
"Hugo Van hamme"
] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative entries. In this paper, the approximation quality is measured by the Kullback-Leibler divergence between the data and its low-rank reconstruction. The existence of the simple multiplicative update (MU) algorithm for computing the matrix factors has contributed to the success of NMF. Despite the availability of algorithms showing faster convergence, MU remains popular due to its simplicity. In this paper, a diagonalized Newton algorithm (DNA) is proposed showing faster convergence while the implementation remains simple and suitable for high-rank problems. The DNA algorithm is applied to various publicly available data sets, showing a substantial speed-up on modern hardware. | [
"diagonalized newton algorithm",
"nmf",
"nonnegative matrix factorization",
"data",
"convergence",
"matrix factorization",
"popular machine",
"many problems",
"text mining"
] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | EW5mE9upmnWp1 | review | 1,362,382,860,000 | i87JIQTAnB8AQ | [
"everyone"
] | [
"anonymous reviewer 482c"
] | ICLR.cc/2013/conference | 2013 | title: review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization
review: Overview:
This paper proposes an element-wise (diagonal Hessian) Newton method to speed up convergence of the multiplicative update algorithm (MU) for NMF problems. Monotonic progress is guaranteed by an element-wise fall-back mechanism to MU. At a minimal computational overhead, this is shown to be effective in a number of experiments.
The paper is well-written, the experimental validation is convincing, and the author provides detailed pseudocode and a matlab implementation.
Comments:
There is a large body of related work outside of the NMF field that considers diagonal Hessian preconditioning of updates, going back (at least) as early as Becker & LeCun in 1988.
Switching between EM and Newton update (using whichever is best, element-wise) is an interesting alternative to more classical forms of line search: it may be worth doing a more detailed comparison to such established techniques.
I would appreciate a discussion of the potential of extending the idea to non KL-divergence costs. |
qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | [
"Richard Socher",
"Milind Ganjoo",
"Hamsa Sridhar",
"Osbert Bastani",
"Christopher Manning",
"Andrew Y. Ng"
] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images. | [
"model",
"transfer",
"objects",
"images",
"unseen classes",
"work",
"training data",
"available",
"necessary knowledge",
"unseen categories"
] | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | UgMKgxnHDugHr | review | 1,362,080,640,000 | qEV_E7oCrKqWT | [
"everyone"
] | [
"anonymous reviewer cfb0"
] | ICLR.cc/2013/conference | 2013 | title: review of Zero-Shot Learning Through Cross-Modal Transfer
review: *A brief summary of the paper's contributions, in the context of prior work*
This paper introduces a zero-shot learning approach to image classification. The model first tries to detect whether an image contains an object from a so-far unseen category. If not, the model relies on a regular, state-of-the art supervised classifier to assign the image to known classes. Otherwise, it attempts to identify what this object is, based on a comparison between the image and each unseen class, in a learned joint image/class representation space. The method relies on pre-trained word representations, extracted from unlabelled text, to represent the classes. Experiments evaluate the compromise between classification accuracy on the seen classes and the unseen classes, as a threshold for identifying an unseen class is varied.
*An assessment of novelty and quality*
This paper goes beyond the current work on zero-shot learning in 2 ways. First, it shows that very good classification of certain pairs of unseen classes can be achieved based on learned (as opposed to hand designed) representations for these classes. I find this pretty impressive.
The second contribution is in a method for dealing with seen and unseen classes, based on the idea that unseen classes are outliers. I've seen little work attacking directly this issue. Unfortunately, I'm not super impressed with the results: having to drop from 80% to 70% to obtain between 15% and 30% accuracy on unseen classes (and only for certain pairs) is a bit disappointing. But it's a decent first step. Plus, the proposed model is overall fairly simple, and zero-shot learning is quite challenging, so in fact it's perhaps surprising that a simple approach doesn't do worse.
Finally, I find the paper reads well and is quite clear in its methodology.
I do wonder why the authors claim that they 'further extend [the] theoretical analysis [of Palatucci et a.] ... and weaken their strong assumptions'. This sentence suggests there is a theoretical contribution to this work, which I don't see. So I would remove that sentence.
Also, the second paragraph of section 6 is incomplete.
*A list of pros and cons (reasons to accept/reject)*
The pros are:
- attacks an important, very hard problem
- goes significantly beyond the current literature on zero-shot learning
- some of the results are pretty impressive
The cons are:
- model is a bit simple and builds quite a bit on previous work on image classification [6] and unsupervised learning of word representation [15] (but frankly, that's really not such a big deal) |
qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | [
"Richard Socher",
"Milind Ganjoo",
"Hamsa Sridhar",
"Osbert Bastani",
"Christopher Manning",
"Andrew Y. Ng"
] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images. | [
"model",
"transfer",
"objects",
"images",
"unseen classes",
"work",
"training data",
"available",
"necessary knowledge",
"unseen categories"
] | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | 88s34zXWw20My | review | 1,362,001,800,000 | qEV_E7oCrKqWT | [
"everyone"
] | [
"anonymous reviewer 310e"
] | ICLR.cc/2013/conference | 2013 | title: review of Zero-Shot Learning Through Cross-Modal Transfer
review: summary:
the paper presents a framework to learn to classify images that can come either from known
or unknown classes. This is done by first mapping both images and classes into a joint embedding
space. Furthermore, the probability of an image being of an unknown class is estimated using
a mixture of Gaussians. Experiments on CIFAR-10 show how performance vary depending on the threshold use to
determine if an image is of a known class or not.
review:
- The idea of learning a joint embedding of images and classes is not new, but is nicely explained
in the paper.
- the authors relate to other works on zero-shot learning. I have not seen references to similarity learning,
which can be used to say if two images are of the same class. These can obviously be used to determine
if an image is of a known class or not, without having seen any image of the class.
- The proposed approach to estimate the probability that an image is of a known class or not is based
on a mixture of Gaussians, where one Gaussian is estimated for each known class where the mean is
the embedding vector of the class and the standard deviation is estimated on the training samples of
that class. I have a few concerns with this:
* I wonder if the standard deviation will not be biased (small) since it is estimated on the training
samples. How important is that?
* I wonder if the threshold does not depend on things like the complexity of the class and the number
of training examples of the class. In general, I am not convinced that a single threshold can be used
to estimate if a new image is of a new class. I agree it might work for a small number of well
separate classes (like CIFAR-10), but I doubt it would work for problems with thousands of classes
which obviously are more interconnected to each other.
- I did not understand what to do when one decides that an image is of an unknown class. How should it
be labeled in that case?
- I did not understand why one needs to learn a separate classifier for the known classes, instead of
just using the distance to the known classes in the embedding space. |
qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | [
"Richard Socher",
"Milind Ganjoo",
"Hamsa Sridhar",
"Osbert Bastani",
"Christopher Manning",
"Andrew Y. Ng"
] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images. | [
"model",
"transfer",
"objects",
"images",
"unseen classes",
"work",
"training data",
"available",
"necessary knowledge",
"unseen categories"
] | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | ddIxYp60xFd0m | review | 1,363,754,820,000 | qEV_E7oCrKqWT | [
"everyone"
] | [
"Richard Socher"
] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their feedback.
I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class.
- Thanks for the reference. Would you use the images of other classes to train classification similarity learning? These would have a different distribution than the completely unseen images from the zero shot classes? In other words, what would the non-similar objects be?
* I wonder if the standard deviation will not be biased (small) since it is estimated on the training samples. How important is that?
- We tried fitting a general covariance matrix and it decreases performance.
* I wonder if the threshold does not depend on things like the complexity of the class and the number of training examples of the class.
- It might be and we notice that different thresholds should be selected via cross validation.
In general, I am not convinced that a single threshold can be used to estimate if a new image is of a new class.
- Right, we found a better performance by fitting different thresholds for each class. We will include this in follow-up paper submissions.
I did not understand what to do when one decides that an image is of an unknown class. How should it be labeled in that case?
- Using the distances to the word vectors of the unknown classes.
I did not understand why one needs to learn a separate classifier for the known classes, instead of just using the distance to the known classes in the embedding space.
reply.
- The discriminative classifiers have much higher accuracy than the simple distances for known classes.
I do wonder why the authors claim that they 'further extend [the] theoretical analysis [of Palatucci et a.] ... and weaken their strong assumptions'.
- Thanks, we will take this and the other typo out and uploaded a new version to arxiv (which should be available soon). |
qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | [
"Richard Socher",
"Milind Ganjoo",
"Hamsa Sridhar",
"Osbert Bastani",
"Christopher Manning",
"Andrew Y. Ng"
] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images. | [
"model",
"transfer",
"objects",
"images",
"unseen classes",
"work",
"training data",
"available",
"necessary knowledge",
"unseen categories"
] | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | SSiPd5Rr9bdXm | review | 1,363,754,760,000 | qEV_E7oCrKqWT | [
"everyone"
] | [
"Richard Socher"
] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their feedback.
I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class.
- Thanks for the reference. Would you use the images of other classes to train classification similarity learning? These would have a different distribution than the completely unseen images from the zero shot classes? In other words, what would the non-similar objects be?
* I wonder if the standard deviation will not be biased (small) since it is estimated on the training samples. How important is that?
- We tried fitting a general covariance matrix and it decreases performance.
* I wonder if the threshold does not depend on things like the complexity of the class and the number of training examples of the class.
- It might be and we notice that different thresholds should be selected via cross validation.
In general, I am not convinced that a single threshold can be used to estimate if a new image is of a new class.
- Right, we found a better performance by fitting different thresholds for each class. We will include this in follow-up paper submissions.
I did not understand what to do when one decides that an image is of an unknown class. How should it be labeled in that case?
- Using the distances to the word vectors of the unknown classes.
I did not understand why one needs to learn a separate classifier for the known classes, instead of just using the distance to the known classes in the embedding space.
reply.
- The discriminative classifiers have much higher accuracy than the simple distances for known classes.
I do wonder why the authors claim that they 'further extend [the] theoretical analysis [of Palatucci et a.] ... and weaken their strong assumptions'.
- Thanks, we will take this and the other typo out and uploaded a new version to arxiv (which should be available soon). |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers. | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | eG1mGYviVwE-r | comment | 1,363,730,760,000 | Av10rQ9sBlhsf | [
"everyone"
] | [
"Alan L. Yuille, Roozbeh Mottaghi"
] | ICLR.cc/2013/conference | 2013 | reply: Okay, thanks. We understand your viewpoint. |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers. | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | EHF-pZ3qwbnAT | review | 1,362,609,900,000 | ZhGJ9KQlXi9jk | [
"everyone"
] | [
"anonymous reviewer a9e8"
] | ICLR.cc/2013/conference | 2013 | title: review of Complexity of Representation and Inference in Compositional Models with
Part Sharing
review: This paper explores how inference can be done in a part-sharing model and the computational cost of doing so. It relies on 'executive summaries' where each layer only holds approximate information about the layer below. The authors also study the computational complexity of this inference in various settings.
I must say I very much like this paper. It proposes a model which combines fast and approximate inference (approximate in the sense that the global description of the scene lacks details) with a slower and exact inference (in the sense that it allows exact inference of the parts of the model). Since I am not familiar with the literature, I cannot however judge the novelty of the work.
Pros:
- model which attractively combines inference at the top level with inference at the lower levels
- the analysis of the computational complexity for varying number of parts and objects is interesting
- the work is very conjectural but I'd rather see it acknowledged than hidden under toy experiments.
Cons: |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers. | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | sPw_squDz1sCV | review | 1,363,536,060,000 | ZhGJ9KQlXi9jk | [
"everyone"
] | [
"Aaron Courville"
] | ICLR.cc/2013/conference | 2013 | review: Reviewer c1e8,
Please read the authors' responses to your review. Do they change your evaluation of the paper? |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers. | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | Rny5iXEwhGnYN | comment | 1,362,095,760,000 | p7BE8U1NHl8Tr | [
"everyone"
] | [
"Alan L. Yuille, Roozbeh Mottaghi"
] | ICLR.cc/2013/conference | 2013 | reply: The unsupervised learning will also appear at ICLR. So we didn't describe it in this paper and concentrated instead on the advantages of compositional models for search after the learning has been done.
The reviewer says that this result is not very novel and mentions analogies to complexity gain of large convolutional networks. This is an interesting direction to explore, but we are unaware of any mathematical analysis of convolutional networks that addresses these issues (please refer us to any papers that we may have missed). Since our analysis draws heavily on properties of compositional models -- explicit parts, executive summary, etc -- we are not sure how our analysis can be applied directly to convolutional networks. Certain aspects of our analysis also are novel to us -- e.g., the sharing of parts, the parallelization.
In summary, although it is plausible that compositional models and convolutional nets have good scaling properties, we are unaware of any other mathematical results demonstrating this. |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers. | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | O3uWBm_J8IOlG | comment | 1,363,731,300,000 | EHF-pZ3qwbnAT | [
"everyone"
] | [
"Alan L. Yuille, Roozbeh Mottaghi"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks for your comments. The paper is indeed conjectural which is why we are submitting it to this new type of conference. But we have some proof of content from some of our earlier work -- and we are working on developing real world models using these types of ideas. |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers. | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | Av10rQ9sBlhsf | comment | 1,363,643,940,000 | Rny5iXEwhGnYN | [
"everyone"
] | [
"anonymous reviewer c1e8"
] | ICLR.cc/2013/conference | 2013 | reply: Sorry: I should have written 'although I do not see it as very surprising' instead of 'novel'.
The analogy with convolutional networks is that quantities computed by low-level nodes can be shared by several high level nodes. This is trivial in the case of conv. nets, and not trivial in your case because you have to organize the search algorithm in a manner that leverages this sharing.
But I still like your paper because it gives 'a self-contained description of a sophisticated and conceptually sound object recognition system'. Although my personal vantage point makes the complexity result less surprising, the overall achievement is non trivial and absolutely worth publishing. |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers. | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | oCzZPts6ZYo6d | review | 1,362,211,680,000 | ZhGJ9KQlXi9jk | [
"everyone"
] | [
"anonymous reviewer 915e"
] | ICLR.cc/2013/conference | 2013 | title: review of Complexity of Representation and Inference in Compositional Models with
Part Sharing
review: This paper presents a complexity analysis of certain inference algorithms for compositional models of images based on part sharing.
The intuition behind these models is that objects are composed of parts and that each of these parts can appear in many different objects;
with sensible parallels (not mentioned explicitly by the authors) to typical sampling sets in image compression and to renormalization concepts in physics via model high-level executive summaries.
The construction of hierarchical part dictionaries is an important and in my appreciation challenging prerequisite, but this is not the subject of the paper.
The authors discuss an approach for object detection and object-position inference exploiting part sharing and dynamic programming,
and evaluate its serial and parallel complexity. The paper gathers interesting concepts and presents intuitively-sound theoretical results that could be of interest to the ICLR community. |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers. | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | p7BE8U1NHl8Tr | review | 1,361,997,540,000 | ZhGJ9KQlXi9jk | [
"everyone"
] | [
"anonymous reviewer c1e8"
] | ICLR.cc/2013/conference | 2013 | title: review of Complexity of Representation and Inference in Compositional Models with
Part Sharing
review: The paper describe a compositional object models that take the form of a hierarchical generative models. Both object and part models provide (1) a set of part models, and (2) a generative model essentially describing how parts are composed. A distinctive feature of this model is the ability to support 'part sharing' because the same part model can be used by multiple objects and/or in various points of the object hierarchical description. Recognition is then achieved with a Viterbi search. The central point of the paper is to show how part sharing provides opportunities to reduce the computational complexity of the search because computations can be reused.
This is analogous to the complexity gain of a large convolutional network over a sliding window recognizer of similar architecture. Although I am not surprised by this result, and although I do not see it as very novel, this paper gives a self-contained description of a sophisticated and conceptually sound object recognition system. Stressing the complexity reduction associated with part sharing is smart because the search complexity became a central issue in computer vision. On the other hand, the unsupervised learning of the part decomposition is not described in this paper (reference [19]) and could have been relevant to ICLR. |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary description. We describe inference and learning algorithms for these models. We analyze the complexity of this model in terms of computation time (for serial computers) and numbers of nodes (e.g., 'neurons') for parallel computers. In particular, we compute the complexity gains by part sharing and its dependence on how the dictionary scales with the level of the hierarchy. We explore three regimes of scaling behavior where the dictionary size (i) increases exponentially with the level, (ii) is determined by an unsupervised compositional learning algorithm applied to real data, (iii) decreases exponentially with scale. This analysis shows that in some regimes the use of shared parts enables algorithms which can perform inference in time linear in the number of levels for an exponential number of objects. In other regimes part sharing has little advantage for serial computers but can give linear processing on parallel computers. | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | zV1YApahdwAIu | comment | 1,362,352,080,000 | oCzZPts6ZYo6d | [
"everyone"
] | [
"Alan L. Yuille, Roozbeh Mottaghi"
] | ICLR.cc/2013/conference | 2013 | reply: We hadn't thought of renormalization or image compression. But renormalization does deal with scale (I think B. Gidas had some papers on this in the 90's). There probably is a relation to image compression which we should explore. |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in videos sequences that could be processed in real-time using appropriate hardware such as an FPGA. | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | qO9gWZZ1gfqhl | review | 1,362,163,380,000 | ttnAE7vaATtaK | [
"everyone"
] | [
"anonymous reviewer 777f"
] | ICLR.cc/2013/conference | 2013 | title: review of Indoor Semantic Segmentation using depth information
review: Segmentation with multi-scale max pooling CNN, applied to indoor vision, using depth information. Interesting paper! Fine results.
Question: how does that compare to multi-scale max pooling CNN for a previous award-winning application, namely, segmentation of neuronal membranes (Ciresan et al, NIPS 2012)? |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in videos sequences that could be processed in real-time using appropriate hardware such as an FPGA. | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | tG4Zt9xaZ8G5D | comment | 1,363,298,100,000 | Ub0AUfEOKkRO1 | [
"everyone"
] | [
"Camille Couprie"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and helpful comments. We computed and added error bars as suggested in Table 1. However, computing standard deviation for the individual means per class of objects does not apply here: the per class accuracies are not computed image per image. Each number corresponds to a ratio of the total number of correctly classified pixels as a particular class, on the number of pixels belonging to this class in the dataset.
For the pixel-wise accuracy, we now give the standard deviation in Table 1, as well as the median. As the two variances are equal using depth or not, we computed the statistical significance using a two sample t-test, that results in a t statistic equal to 1.54, which is far from the mean performance of 52.2 and thus we can consider that the two reported means are statistically significant.
About the class-by class improvements displayed in Table 1, we discuss the fact that objects having a constant appearance of depth are in general more inclined to take benefit from depth information. As the major part of the scenes contains categories that respect this property, the improvements achieved using depth involve a smaller number of categories, but a larger volume of data.
To strengthen our comparison of the two networks using or not depth information, we now display the results obtained using only the multiscale network without depth information in Figure 2.
We hope that the changes that we made in the paper (which should be updated within the next 24 hours) answer your concerns. |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in videos sequences that could be processed in real-time using appropriate hardware such as an FPGA. | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | OOB_F66xrPKGA | comment | 1,363,297,980,000 | 2-VeRGGdvD-58 | [
"everyone"
] | [
"Camille Couprie"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and helpful comments.
The missing values in the depth acquisition were pre-processed using inpainting code available online on Nathan Siberman’s web page. We added the reference to the paper.
In the paper, we made the observation that the classes for which depth fails to outperform the RGB model are the classes of object for which the depth map does not vary too much. We now stress out better this observation with the addition of some depth maps at Figure 2.
The question you are raising about whether or not the depth is always useful, or if there could be better ways to leverage depth data is a very good question, and at the moment is still un-answered. The current RGBD multiscale network is the best way we found to learn features using depth, now maybe we could improve the system by introducing an appropriate contrast normalization of the depth map, or maybe we could combine the learned features using RGB and the learned features using RGBD… |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in videos sequences that could be processed in real-time using appropriate hardware such as an FPGA. | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | Ub0AUfEOKkRO1 | review | 1,362,368,040,000 | ttnAE7vaATtaK | [
"everyone"
] | [
"anonymous reviewer 5193"
] | ICLR.cc/2013/conference | 2013 | title: review of Indoor Semantic Segmentation using depth information
review: This work builds on recent object-segmentation work by Farabet et al., by augmenting the pixel-processing pathways with ones that processes a depth map from a Kinect RGBD camera. This work seems to me a well-motivated and natural extension now that RGBD sensors are readily available.
The incremental value of the depth channel is not entirely clear from this paper. In principle, the depth information should be valuable. However, Table 1 shows that for the majority of object types, the network that ignores depth is actually more accurate. Although the averages at the bottom of Table 1 show that depth-enhanced segmentation is slightly better, I suspect that if those averages included error bars (and they should), the difference would be insignificant. In fact, all the accuracies in Table 1 should have error bars on them. The comparisons with the work of Silberman et al. are more favorable to the proposed model, but again, the comparison would be strengthened by discussion of statistical confidence.
Qualitatively, I would have liked to see the ouput from the convolutional network of Farabet et al. without the depth channel, as a point of comparison in Figures 2 and 3. Without that point of comparison, Figures 2 and 3 are difficult to interpret as supporting evidence for the model using depth.
Pro(s)
- establishes baseline RGBD results with convolutional networks
Con(s)
- quantitative results lack confidence intervals
- qualitative results missing important comparison to non-rgbd network |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in videos sequences that could be processed in real-time using appropriate hardware such as an FPGA. | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | VVbCVyTLqczWn | comment | 1,363,297,440,000 | qO9gWZZ1gfqhl | [
"everyone"
] | [
"Camille Couprie"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and pointing out the paper of Ciresan et al., that we added to our list of references. Similarly to us, they apply the idea of using a kind of multi-scale network. However, Ciseran's approach to foveation differs from ours: where we use a multiscale pyramid to provide a foveated input to the network, they artificially blur the input's content, radially, and use non-uniform sampling to connect the network to it. The major advantage of using a pyramid is that the whole pyramid can be applied convolutionally, to larger input sizes. Once the model is trained, it must be applied as a sliding window to classify each pixel in the input. Using their method, which requires a radial blur centered on each pixel, the model cannot be applied convolutionally. This is a major difference, which dramatically impacts test time.
Note: Ciseran's 2012 NIPS paper appeared after our first paper (ICML 2012) on the subject. |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in videos sequences that could be processed in real-time using appropriate hardware such as an FPGA. | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | 2-VeRGGdvD-58 | review | 1,362,213,660,000 | ttnAE7vaATtaK | [
"everyone"
] | [
"anonymous reviewer 03ba"
] | ICLR.cc/2013/conference | 2013 | title: review of Indoor Semantic Segmentation using depth information
review: This work applies convolutional neural networks to the task of RGB-D indoor scene segmentation. The authors previously evaulated the same multi-scale conv net architecture on the data using only RGB information, this work demonstrates that for most segmentation classes providing depth information to the conv net increases performance.
The model simply adds depth as a separate channel to the existing RGB channels in a conv net. Depth has some unique properties e.g. infinity / missing values depending on the sensor. It would be nice to see some consideration or experiments on how to properly integrate depth data into the existing model.
The experiments demonstrate that a conv net using depth information is competitive on the datasets evaluated. However, it is surprising that the model leveraging depth is not better in all cases. Discussion on where the RGB-D model fails to outperform the RGB only model would be a great contribution to add. This is especially apparent in table 1. Does this suggest that depth isn't always useful, or that there could be better ways to leverage depth data?
Minor notes:
'modalityies' misspelled on page 1
Overall:
- A straightforward application of conv nets to RGB-D data, yielding fairly good results
- More discussion on why depth fails to improve performance compared to an RGB only model would strengthen the experimental findings |
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitely storing the natural gradient metric $L$. This metric is shown to be the expected second derivative of the log-partition function (under the model distribution), or equivalently, the variance of the vector of partial derivatives of the energy function. We evaluate our method on the task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG does indeed have faster per-epoch convergence compared to Stochastic Maximum Likelihood with centering, though wall-clock performance is currently not competitive. | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | LkyqLtotdQLG4 | review | 1,362,012,600,000 | OpvgONa-3WODz | [
"everyone"
] | [
"anonymous reviewer 9212"
] | ICLR.cc/2013/conference | 2013 | title: review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
review: The paper describes a Natural Gradient technique to train Boltzman machines. This is essentially the approach of Amari et al (1992) where the Fisher information matrix is expressed in which the authors estimate the Fisher information matrix L with examples sampled from the model distribution using a MCMC approach with multiple chains. The gradient g is estimated from minibatches, and the weight update x is obtained by solving Lx=g with an efficient truncated algorithm. Doing so naively would be very costly because the matrix L is large. The trick is to express L as the covariance of the Jacobian S with respect to the model distribution and take advantage of the linear nature of the sample average to estimate the product Lw in a manner than only requires the storage of the Jacobien for each sample.
This is a neat idea. The empirical results are preliminary but show promise. The proposed algorithm requires less iterations but more wall-clock time than SML. Whether this is due to intrinsic properties of the algorithm or to deficiencies of the current implementation is not clear. |
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitely storing the natural gradient metric $L$. This metric is shown to be the expected second derivative of the log-partition function (under the model distribution), or equivalently, the variance of the vector of partial derivatives of the energy function. We evaluate our method on the task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG does indeed have faster per-epoch convergence compared to Stochastic Maximum Likelihood with centering, though wall-clock performance is currently not competitive. | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | o5qvoxIkjTokQ | review | 1,362,294,960,000 | OpvgONa-3WODz | [
"everyone"
] | [
"anonymous reviewer 7e2e"
] | ICLR.cc/2013/conference | 2013 | title: review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
review: This paper presents a natural gradient algorithm for deep Boltzmann machines. The authors must be commended for their extremely clear and succinct description of the natural gradient method in Section 2. This presentation is particularly useful because, indeed, many of the papers on information geometry are hard to follow. The derivations are also correct and sound. The derivations in the appendix are classical statistics results, but their addition is likely to improve readability of the paper.
The experiments show that the natural gradient approach does better than stochastic maximum likelihood when plotting estimated likelihood against epochs. However, per unit computation, the stochastic maximum likelihood method still does better.
I was not able to understand remark 4 about mini-batches. Why are more parallel chains needed? Why not simply use a single chain but have longer memory. I strongly think this part of the paper could be improved if the authors write down the pseudo-code for their algorithm. Another suggestion is to use automatic algorithm configuration to find the optimal hyper-parameters for each method, given that they are so close.
The trade-offs of second order versus first order optimization methods are well known in the deterministic case. There is is also some theoretical guidance for the stochastic case. I encourage the authors to look at the following papers for this:
A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets. N. Le Roux, M. Schmidt, F. Bach. NIPS, 2012.
Hybrid Deterministic-Stochastic Methods for Data Fitting.
M. Friedlander, M. Schmidt. SISC, 2012.
'On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning' R. Byrd, G. Chin and W. Neveitt, J. Nocedal.
SIAM J. on Optimization, vol 21, issue 3, pages 977-995 (2011).
'Sample Size Selection in Optimization Methods for Machine Learning'
R. Byrd, G. Chin, J. Nocedal and Y. Wu. to appear in Mathematical Programming B (2012).
In practical terms, given that the methods are so close, how does the choice of implementation (GPUs, multi-cores, single machine) affect the comparison? Also, how data dependent are the results. I would be nice to gain a deeper understanding of the conditions under which the natural gradient might or might not work better than stochastic maximum likelihood when training Boltzmann machines.
Finally, I would like to point out a few typos to assist in improving the paper:
Page 1: litterature should be literature
Section 2.2 cte should be const for consistency.
Section 3: Avoid using x instead of grad_N in the linear equation for Lx=E(.) This causes overloading. For consistency with the previous section, please use grad_N instead.
Section 4: Add a space between MNIST and [7].
Appendix 5.1: State that the expectation is with respect to p_{ heta}(x).
Appendix 5.2: The expectation with respect to q_ heta should be with respect to p_{ heta}(x) to ensure consistency of notation, and correctness in this case.
References: References [8] and [9] appear to be duplicates of the same paper by J. Martens. |
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitely storing the natural gradient metric $L$. This metric is shown to be the expected second derivative of the log-partition function (under the model distribution), or equivalently, the variance of the vector of partial derivatives of the energy function. We evaluate our method on the task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG does indeed have faster per-epoch convergence compared to Stochastic Maximum Likelihood with centering, though wall-clock performance is currently not competitive. | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | dt6KtywBaEvBC | review | 1,362,379,800,000 | OpvgONa-3WODz | [
"everyone"
] | [
"anonymous reviewer 77a7"
] | ICLR.cc/2013/conference | 2013 | title: review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
review: This paper introduces a new gradient descent algorithm that combines is based on Hessian-free optimization, but replaces the approximate Hessian-vector product by an approximate Fisher information matrix-vector product. It is used to train a DBM, faster than the baseline algorithm in terms of epochs needed, but at the cost of a computational slowdown (about a factor 30). The paper is well-written, the algorithm is novel, although not fundamentally so.
In terms of motivation, the new algorithm aims to attenuate the effect of ill-conditioned Hessians, however that claim is weakened by the fact that the experiments seem to still require the centering trick. Also, reproducibility would be improved with pseudocode (including all tricks used) was provided in the appendix (or a link to an open-source implementation, even better).
Other comments:
* Remove the phrase 'first principles', it is not applicable here.
* Is there a good reason to limit section 2.1 to a discrete and bounded domain X?
* I'm not a big fan of the naming a method whose essential ingredient is a metric 'Metric-free' (I know Martens did the same, but it's even less appropriate here).
* I doubt the derivation in appendix 5.1 is a new result, could be omitted.
* Hyper-parameter tuning is over a small ad-hoc set, and finally chosen values are not reported.
* Results should be averaged over multiple runs, and error-bars given.
* The authors could clarify how the algorithm complexity scales with problem dimension, and where the computational bottleneck lies, to help the reader judge its promise beyond the current results.
* A pity that it took longer than 6 weeks for the promised 'next revision', I had hoped the authors might resolve some of the self-identified weaknesses in the meanwhile. |
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitely storing the natural gradient metric $L$. This metric is shown to be the expected second derivative of the log-partition function (under the model distribution), or equivalently, the variance of the vector of partial derivatives of the energy function. We evaluate our method on the task of joint-training a 3-layer Deep Boltzmann Machine and show that MFNG does indeed have faster per-epoch convergence compared to Stochastic Maximum Likelihood with centering, though wall-clock performance is currently not competitive. | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | pC-4pGPkfMnuQ | review | 1,363,459,200,000 | OpvgONa-3WODz | [
"everyone"
] | [
"Guillaume Desjardins, Razvan Pascanu, Aaron Courville, Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: Thank you to the reviewers for the helpful feedback. The provided references will no doubt come in handy for future work.
To all reviewers:In an effort to speedup run time, we have re-implemented a significant portion of the MFNG algorithm. This resulted in large speedups for the diagonal approximation of MFNG, and all around lower memory consumption. Unfortunately, this has delayed the submission of a new manuscript, which is still under preparation. The focus of this new revision will be on:
(1) reporting mean and standard deviations of Fig.1 across multiple seeds.
(2) a more careful use of damping and the use of annealed learning rates.
(3) results on a second dataset, and hopefully a second model family (Gaussian RBMs).
In the meantime, we have uploaded a new version which aims to clarify and provide additional technical details, where the reviewers had found it necessary. The main modifications are:
* a new algorithmic description of MFNG
* a new graph which analyzes runtime performance of the algorithm, breaking down the run-time performance between the various steps of the algorithm (sampling, gradient computation, matrix-vector product, and MinRes iterations).
The paper should appear shortly on arXiv, and can be accessed here in the meantime:
http://brainlogging.files.wordpress.com/2013/03/iclr2013_submission1.pdf
An open-source implementation of MFNG can be accessed at the following URL.
https://github.com/gdesjardins/MFNG.git
To Anonymous 7e2e:There are numerous advantages to sampling from parallel chains (with fewer Gibbs steps between samples), compared to using consecutive (or sub-sampled) samples generated by a single Markov chain. First, running multiple chains guarantees that the samples are independent. Running a single chain will no doubt result in correlated samples which will negatively impact our estimates of the gradient and the metric. Second, simulating multiple chains is an implicitly parallel process, which can be implemented efficiently on both CPU and GPU (especially so on GPU). The downside however is in increase in memory consumption.
To Anonymous 77a7:
>> In terms of motivation, the new algorithm aims to attenuate the effect of ill-conditioned Hessians, however that claim is weakened by the fact that the experiments seem to still require the centering trick.
Since ours is a natural gradient method, it attenuates the effect of ill-conditioned probability manifolds (expected hessian of log Z, under the model distribution), not ill-conditioning of the expected hessian (under the empirical distribution). It is thus possible that centering addresses the latter form of ill-conditioning. Another hypothesis is that centering provides a better initialization point, around which the natural gradient metric is better-conditioned and thus easier to invert. More experiments are required to answer these questions.
>> Also, reproducibility would be improved with pseudocode (including all tricks used) was provided in the appendix (or a link to an open-source implementation, even better).
Our source code and algorithmic description should shed some light on this issue. The only 'trick' we currently use is a fixed damping coefficient along the diagonal, to improve conditioning and speed up convergence of our solver. Alternative forms of initialization and preconditioning were not used in the experiments.
>> Is there a good reason to limit section 2.1 to a discrete and bounded domain chi?
These limitations mostly reflect our interest with Boltzmann Machines. Generalizing these results to unbounded domains (or continuous variables) remains to be investigated.
>> Hyper-parameter tuning is over a small ad-hoc set, and finally chosen values are not reported.
The results of our grid-search have been added to the caption of Figure 1. |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise. | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | d6u7vbCNJV6Q8 | review | 1,361,968,020,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"anonymous reviewer ac47"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep Predictive Coding Networks
review: Deep predictive coding networks
This paper introduces a new model which combines bottom-up, top-down, and temporal information to learning a generative model in an unsupervised fashion on videos. The model is formulated in terms of states, which carry temporal consistency information between time steps, and causes which are the latent variables inferred from the input image that attempt to explain what is in the image.
Pros:
Somewhat interesting filters are learned in the second layer of the model, though these have been shown in prior work.
Noise reduction on the toy images seems reasonable.
Cons:
The explanation of the model was overly complicated. After reading the the entire explanation it appears the model is simply doing sparse coding with ISTA alternating on the states and causes. The gradient for ISTA simply has the gradients for the overall cost function, just as in sparse coding but this cost function has some extra temporal terms.
The noise reduction is only on toy images and it is not obvious if this is what you would also get with sparse coding using larger patch sizes and high amounts of sparsity. The explanation of points between clusters coming from change in sequences should also appear in the clean video as well because as the text mentions the video changes as well. This is likely due to multiple objects overlapping instead and confusing the model.
Figure 1 should include the variable names because reading the text and consulting the figure is not very helpful currently.
It is hard to reason what each of the A,B, and C is doing without a picture of what they learn on typical data. The layer 1 features seem fairly complex and noisy for the first layer of an image model which typically learns gabor-like features.
Where did z come from in equation 11?
It is not at all obvious why the states should be temporally consistent and not the causes. The causes are pooled versions of the states and this should be more invariant to changes at the input between frames.
Novelty and Quality:
The paper introduces a novel extension to hierarchical sparse coding method by incorporating temporal information at each layer of the model. The poor explanation of this relatively simple idea holds the paper back slightly. |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise. | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | Xu4KaWxqIDurf | review | 1,363,393,200,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | review: The revised paper is uploaded onto arXiv. It will be announced on 18th March.
In the mean time, the paper is also made available at
https://www.dropbox.com/s/klmpu482q6nt1ws/DPCN.pdf |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise. | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | 00ZvUXp_e10_E | comment | 1,363,392,660,000 | EEhwkCLtAuko7 | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for you review and comments, particularly for pointing out some mistakes in the paper. Following is our response to some concerns you have raised.
>>> 'You should state the functional form for F and G!! Working backwards from the energy function, it looks as if these are just linear functions?'
We use the generalized state-space equations in Eq.1 and Eq.2 to motivate the relation between the proposed model and dynamic networks. However, please note that it is difficult to state the explicit form of F and G, since sparsity constraint even on a linear dynamical system leads to a non-linear mapping between the observations and the states.
>>> 'In Eq. 1 should F( x_t, u_t ) instead just be F( x_t )? Eqs. 3 and 4 suggest it should just be F( x_t ), and this would resolve points which I found confusing later in the paper.'
Agreed. We made appropriate changes in the revised paper.
>>> The relationship between the energy functions in eqs. 3 and 4 is confusing to me. (this may have to do with the (non?)-dependence of F on u_t)
We made this explicit in the revised paper. Eq.3 represents the energy function for inferring the x_t with fixed u_t and Eq.4 represents the energy function for inferring the u_t with fixed x_t. In order to be more clear, we now wrote a unified energy function (Eq. 5) from which we jointly infer both x_t and u_t.
>>> 'Section 2.3.1, 'It is easy to show that this is equivalent to finding the mode of the distribution...': You probably mean MAP not mode. Additionally this is non-obvious. It seems like this would especially not be true after marginalizing out u_t. You've never written the joint distributions over p(x_t, y_t, x_t-1), and the role of the different energy functions was unclear.'
Agreed, this statement is incorrect and is removed.
>>> 'Section 3.1: In a linear mapping, how are 4 overlapping patches different from a single larger patch?'
Please note that the states from the 4 overlapping patches are pooled using a non-linear function (sum of the absolute value of the state vectors). Hence, the output is no longer a linear mapping.
>>> 'Section 3.2: Do you do anything about the discontinuities which would occur between the 100-frame sequences?'
No, we simply consider the concatenated sequence as a single video. This is made more clear in the paper. |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise. | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | iiUe8HAsepist | comment | 1,363,392,180,000 | d6u7vbCNJV6Q8 | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and comments. We revised the paper to address most of your concerns. Following is our response to some specific point you have raised.
>>> 'The explanation of the model was overly complicated. After reading the the entire explanation it appears the model is simply doing sparse coding with ISTA alternating on the states and causes. The gradient for ISTA simply has the gradients for the overall cost function, just as in sparse coding but this cost function has some extra temporal terms.'
We have made major changes to the paper to improve the presentation of the model. Hopefully the newer version will make the explanation more clear.
We would also like to emphasis that the paper makes two important contributions: (1) as you have pointed out, introduces sparse coding in dynamical models and solves it using a novel inference procedure similar to ISTA. (2) considers top-down information while performing inference in the hierarchical model.
>>> 'The noise reduction is only on toy images and it is not obvious if this is what you would also get with sparse coding using larger patch sizes and high amounts of sparsity.'
We agree with you that it would strengthen our arguments by showing denoising on large images or videos. However, to scale this model to large images require convolutional network like model. This is an on going work and we are presently developing a convolutional model for DPCN.
>>> 'The explanation of points between clusters coming from change in sequences should also appear in the clean video as well because as the text mentions the video changes as well. This is likely due to multiple objects overlapping instead and confusing the model.'
Corrected. The points between the clusters appear because we enforce temporal coherence on the causes belonging two consecutive frames at the top layer (see Section 2.4). It is not due to gradual change in the sequences, as said previously.
>>> 'Figure 1 should include the variable names because reading the text and consulting the figure is not very helpful currently.'
Corrected. Also, a new figure is added to bring more clarity.
>>> 'It is hard to reason what each of the A,B, and C is doing without a picture of what they learn on typical data. The layer 1 features seem fairly complex and noisy for the first layer of an image model which typically learns gabor-like features.'
Please see the supplementary material, section A.4 for visualization of the first layer parameters A, B and C. Also, please note that the Figure. 2 shows the visualization of the invariant matrices, B, in a two-layered network. These are obtained by taking the linear combination of Gabor like filters in C^(1) (see Figure .6) and hence, represent more complex structures. This is made more clear in the paper.
>>> 'Where did z come from in equation 11?'
Corrected. It is the Gaussian transition noise over the parameters.
>>> 'It is not at all obvious why the states should be temporally consistent and not the causes. The causes are pooled versions of the states and this should be more invariant to changes at the input between frames.'
We say the states are more temporally 'consistent' to indicate that they are more stable than sparse coding, particularly in high sparsity conditions, because they have to maintain the temporal dependencies. On the other hand, we agree with you that the causes are more invariant to changes in the input and hence, are temporally 'coherent'. |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise. | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | EEhwkCLtAuko7 | review | 1,362,405,300,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"anonymous reviewer 62ac"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep Predictive Coding Networks
review: This paper attempts to capture both the temporal dynamics of signals and the contribution of top down connections for inference using a deep model. The experimental results are qualitatively encouraging, and the model structure seems like a sensible direction to pursue. I like the connection to dynamical systems. The mathematical presentation is disorganized though, and it would have been nice to see some sort of benchmark or externally meaningful quantitative comparison in the experimental results.
More specific comments:
You should state the functional form for F and G!! Working backwards from the energy function, it looks as if these are just linear functions?
In Eq. 1 should F( x_t, u_t ) instead just be F( x_t )? Eqs. 3 and 4 suggest it should just be F( x_t ), and this would resolve points which I found confusing later in the paper.
The relationship between the energy functions in eqs. 3 and 4 is confusing to me. (this may have to do with the (non?)-dependence of F on u_t)
Section 2.3.1, 'It is easy to show that this is equivalent to finding the mode of the distribution...': You probably mean MAP not mode. Additionally this is non-obvious. It seems like this would especially not be true after marginalizing out u_t. You've never written the joint distributions over p(x_t, y_t, x_t-1), and the role of the different energy functions was unclear.
Section 3.1: In a linear mapping, how are 4 overlapping patches different from a single larger patch?
Section 3.2: Do you do anything about the discontinuities which would occur between the 100-frame sequences? |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise. | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | o1YP1AMjPx1jv | comment | 1,363,393,020,000 | Za8LX-xwgqXw5 | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for review and comments. We revised the paper to address most of your concerns. Following is our response to some specific point you have raised.
>>> ' The clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model'
We made some major changes to improve the presentation of the model, with more emphasis on explaining the formulation. Hopefully the revised version will improve the clarity of the paper.
>>> ' The empirical evaluation of the model could be strengthened by directly comparing the DPCN to related works on non-synthetic datasets.'
We agree that the empirical evaluation could be strengthened by comparing DPCN with other models in tasks like denoising, classification etc., on large image and video datasets. However, to scale this model to larger inputs we require convolutional network like models, similar to many other methods. This is an on going work and we are presently working on a convolutional model for DPCN.
>>>'In the beginning of the section 2.1, please define P, D, K to improve clarity.
>>> In section 2.2, little explanation about the pooling matrix B is given. Also, more explanations about equation 4 would be desirable.
>>> What is z_{t} in Equation 11?'
Corrected. These are explained more clearly in the revised paper. z_{t} is the Gaussian transition noise over the parameters.
>>> 'In Section 2.2, its not clear how u_{hat} is computed. '
This is moved into section. 2.4 in the revised paper, where more explanation is provided about u_{hat}. |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise. | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | XTZrXGh8rENYB | comment | 1,363,393,320,000 | 3vEUvBbCrO8cu | [
"everyone"
] | [
"Rakesh Chalasani"
] | ICLR.cc/2013/conference | 2013 | reply: This is in reply to reviewer 1829, mistakenly pasted here. Please ignore. |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise. | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | Za8LX-xwgqXw5 | review | 1,362,498,780,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"anonymous reviewer 1829"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep Predictive Coding Networks
review: A brief summary of the paper's contributions, in the context of prior work.
The paper proposes a hierarchical sparse generative model in the context of a dynamical system. The model can capture temporal dependencies in time-varying data, and top-down information (from high-level contextual/causal units) can modulate the states and observations in lower layers.
Experiments were conducted on a natural video dataset, and on a synthetic video dataset with moving geometric shapes. On the natural video dataset, the learned receptive fields represent edge detectors in the first layer, and higher-level concepts such as corners and junctions in the second layer. In the synthetic sequence dataset, hierarchical top-down inference is used to robustly infer about “causal” units associated with object shapes.
An assessment of novelty and quality.
This work can be viewed as a novel extension of hierarchical sparse coding to temporal data. Specifically, it is interesting to see how to incorporate dynamical systems into sparse hierarchical models (that alternate between state units and causal units), and how the model can perform bottom-up/top-down inference. The use of Nestrov’s method to approximate the non-smooth state transition terms in equation 5 is interesting.
The clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model (also, see comments below).
The experimental results (identifying high-level causes from corrupted temporal data) seem quite reasonable on the synthetic dataset. However, the results are all too qualitative. The empirical evaluation of the model could be strengthened by directly comparing the DPCN to related works on non-synthetic datasets.
Other questions and comments:
- In the beginning of the section 2.1, please define P, D, K to improve clarity.
- In section 2.2, little explanation about the pooling matrix B is given. Also, more explanations about equation 4 would be desirable.
- What is z_{t} in Equation 11?
- In Section 2.2, it’s not clear how u_hat is computed.
A list of pros and cons (reasons to accept/reject).
Pros:
- The formulation and the proposed solution are technically interesting.
- Experimental results on a synthetic video data set provide a proof-of-concept demonstration.
Cons:
- The significance of the experiments is quite limited. There is no empirical comparison to other models on real tasks.
- Inference seems to be complicated and computationally expensive.
- Unclear presentation |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. This model captures the temporal dependencies in time-varying signals and uses top-down information to modulate the representation in lower layers. The centerpiece of our model is a novel procedure to infer sparse states of a dynamic model; which is used for feature extraction. We also extend this feature extraction block to introduce a pooling function that captures locally invariant representations. When applied on a natural video data, we show that our method is able to learn high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise. | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | 3vEUvBbCrO8cu | review | 1,363,392,960,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | review: Thank you for review and comments. We revised the paper to address most of your concerns. Following is our response to some specific point you have raised.
>>> ' The clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model'
We made some major changes to improve the presentation of the model, with more emphasis on explaining the formulation. Hopefully the revised version will improve the clarity of the paper.
>>> ' The empirical evaluation of the model could be strengthened by directly comparing the DPCN to related works on non-synthetic datasets.'
We agree that the empirical evaluation could be strengthened by comparing DPCN with other models in tasks like denoising, classification etc., on large image and video datasets. However, to scale this model to larger inputs we require convolutional network like models, similar to many other methods. This is an on going work and we are presently working on a convolutional model for DPCN.
>>>'In the beginning of the section 2.1, please define P, D, K to improve clarity.
>>> In section 2.2, little explanation about the pooling matrix B is given. Also, more explanations about equation 4 would be desirable.
>>> What is z_{t} in Equation 11?'
Corrected. These are explained more clearly in the revised paper. z_{t} is the Gaussian transition noise over the parameters.
>>> 'In Section 2.2, its not clear how u_{hat} is computed. '
This is moved into section. 2.4 in the revised paper, where more explanation is provided about u_{hat}. |
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We proposea graphical model for multi-view feature extraction that automatically adapts its structure to achieve better representation of data distribution. The proposed model, structure-adapting multi-view harmonium (SA-MVH) has switch parameters that control the connection between hidden nodes and input views, and learn the switch parameter while training. Numerical experiments on synthetic and a real-world dataset demonstrate the useful behavior of the SA-MVH, compared to existing multi-view feature extraction methods. | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | UUlHmZjBOIUBb | review | 1,362,353,160,000 | zzEf5eKLmAG0o | [
"everyone"
] | [
"anonymous reviewer d966"
] | ICLR.cc/2013/conference | 2013 | title: review of Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums
review: The paper introduces an new algorithm for simultaneously learning a hidden layer (latent representation) for multiple data views as well as automatically segmenting that hidden layer into shared and view-specific nodes. It builds on the previous multi-view harmonium (MVH) algorithm by adding (sigmoidal) switch parameters that turn a connection on or off between a view and hidden node and uses gradient descent to learn those switch parameters. The optimization is similar to MVH, with a slight modification on the joint distribution between views and hidden nodes, resulting in a change in the gradients for all parameters and a new switch variable to descend on.
This new algorithm, therefore, is somewhat novel; the quality of the explanation and writing is high; and the experimental quality is reasonable.
Pros
1. The paper is well-written and organized.
2. The algorithm in the paper proposes a way to avoid hand designing shared and private (view-specific) nodes, which is an important contribution.
3. The experimental results indicate some interesting properties of the algorithm, in particular demonstrating that the algorithm extracts reasonable shared and view-specific hidden nodes.
Cons
1. The descent directions have W and the switch parameters, s_kj, coupled, which might make learning slow. Experimental results should indicate computation time.
2. The results do not have error bars (in Table 1), so it is unclear if they are statistically significant (the small difference suggests that they may not be).
3. The motivation in this paper is to enable learning of the private and shared representations automatically. However, DWH (only a shared representation) actually seems to perform generally better that MVH (shared and private). The experiments should better explore this question. It might also be a good idea to have a baseline comparison with CCA.
4. In light of Con (3), the algorithm should also be compared to multi-view algorithms that learn only shared representations but do not require the size of the hidden-node set to be fixed (such as the recent relaxed-rank convex multi-view approach in 'Convex Multiview Subspace Learning', M. White, Y. Yu, X. Zhang and D. Schuurmans, NIPS 2012). In this case, the relaxed-rank regularizer does not fix the size of the hidden node set, but regularizes to set several hidden nodes to zero. This is similar to the approach proposed in this paper where a node is not used if the sigmoid value is < 0.5.
Note that these relaxed-rank approaches do not explicitly maximize the likelihood for an exponential family distribution; instead, they allow general Bregman divergences which have been shown to have a one-to-one correspondence with exponential family distributions (see 'Clustering with Bregman divergences' A. Banerjee, S. Merugu, I. Dhillon and J. Ghosh, JMLR 2005). Therefore, by selecting a certain Bregman divergence, the approach in this paper can be compared to the relaxed-rank approaches. |
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We proposea graphical model for multi-view feature extraction that automatically adapts its structure to achieve better representation of data distribution. The proposed model, structure-adapting multi-view harmonium (SA-MVH) has switch parameters that control the connection between hidden nodes and input views, and learn the switch parameter while training. Numerical experiments on synthetic and a real-world dataset demonstrate the useful behavior of the SA-MVH, compared to existing multi-view feature extraction methods. | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | tt7CtuzeCYt5H | comment | 1,363,857,240,000 | DNKnDqeVJmgPF | [
"everyone"
] | [
"YoonSeop Kang"
] | ICLR.cc/2013/conference | 2013 | reply: 1. The distribution of sigma(s_{kj}) had modes near 0 and 1, but the graph of the distribution was omitted due to the space constraints. The amount of separation between modes were affected by the hyperparameters that were not mentioned in the paper.
2. It is true that the separation between digit features and noises in our model is not perfect. But it is also true that view-specific features contain more noisy features than the shared ones.
We appreciate your suggestions about the additional experiments about de-noising digits, and we will present the result of the experiments if we get a chance. |
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We proposea graphical model for multi-view feature extraction that automatically adapts its structure to achieve better representation of data distribution. The proposed model, structure-adapting multi-view harmonium (SA-MVH) has switch parameters that control the connection between hidden nodes and input views, and learn the switch parameter while training. Numerical experiments on synthetic and a real-world dataset demonstrate the useful behavior of the SA-MVH, compared to existing multi-view feature extraction methods. | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | qqdsq7GUspqD2 | comment | 1,363,857,540,000 | UUlHmZjBOIUBb | [
"everyone"
] | [
"YoonSeop Kang"
] | ICLR.cc/2013/conference | 2013 | reply: 1. As the switch parameters converge quickly, the training time of our model was not very different from that of DWH.
2. We performed the experiment several times, but the result was consistent. Still, it is our fault that we didn't repeat the experiments enough to add error bars to the results.
3. MVHs are often outperformed by DWHs unless the sizes of latent node sets are not carefully chosen, and this is one of the most important reason for introducing switch parameters. To make our motivation clear, we assigned 50% of hidden nodes as shared, and evenly assigned the rest of hidden nodes as visible nodes for view-specific nodes of each view. We didn't compare our method to CCA, because we thought DWH would be a better example of models with only a shared representation.
4. We were not aware of the White et al.'s work when we submitted our work, and therefore couldn't make comparison with their model. |
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We proposea graphical model for multi-view feature extraction that automatically adapts its structure to achieve better representation of data distribution. The proposed model, structure-adapting multi-view harmonium (SA-MVH) has switch parameters that control the connection between hidden nodes and input views, and learn the switch parameter while training. Numerical experiments on synthetic and a real-world dataset demonstrate the useful behavior of the SA-MVH, compared to existing multi-view feature extraction methods. | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | DNKnDqeVJmgPF | review | 1,360,866,060,000 | zzEf5eKLmAG0o | [
"everyone"
] | [
"anonymous reviewer 0e7e"
] | ICLR.cc/2013/conference | 2013 | title: review of Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums
review: The authors propose a bipartite, undirected graphical model for multiview learning, called structure-adapting multiview harmonimum (SA-MVH). The model is based on their earlier model called multiview harmonium (MVH) (Kang&Choi, 2011) where hidden units were separated into a shared set and view-specific sets. Unlike MVH which explicitly restricts edges, the visible and hidden units in the proposed SA-MVH are fully connected to each other with switch parameters s_{kj} indicating how likely the j-th hidden unit corresponds to the k-th view.
It would have been better if the distribution of s_{kj}'s (or sigma(s_{kj})) was provided. Unless the distribution has clear modes near 0 and 1, it would be difficult to tell why this approach of learning w^{(k)}_{ij} and s_{kj} separately is better than just learning ilde{w}^{(k)}_{ij} = w^{(k)}_{ij} sigma s_{kj} all together (as in dual-wing harmonium, DWH). Though, the empirical results (experiment 2) show that the features extracted by SA-MVH outperform both MVH and DWH.
The visualizations of shared and view-specific features from the first experiment do not seem to clearly show the power of the proposed method. For instance, it's difficult to say that the filters of roman digits from the shared features do seem to have horizontal noise. It would be better to try some other tasks with the trained model. Would it be possible to sample clean digits (without horizontal or vertical noise) from the model if the view-speific features were forced off? Would it be possible to denoise the corrupted digits? and so on..
Typo:
- Fig. 1 (c): sigma(s_{1j}) and sigma(s_{2j}) |
mLr3In-nbamNu | Local Component Analysis | [
"Nicolas Le Roux",
"Francis Bach"
] | Kernel density estimation, a.k.a. Parzen windows, is a popular density estimation method, which can be used for outlier detection or clustering. With multivariate data, its performance is heavily reliant on the metric used within the kernel. Most earlier work has focused on learning only the bandwidth of the kernel (i.e., a scalar multiplicative factor). In this paper, we propose to learn a full Euclidean metric through an expectation-minimization (EM) procedure, which can be seen as an unsupervised counterpart to neighbourhood component analysis (NCA). In order to avoid overfitting with a fully nonparametric density estimator in high dimensions, we also consider a semi-parametric Gaussian-Parzen density model, where some of the variables are modelled through a jointly Gaussian density, while others are modelled through Parzen windows. For these two models, EM leads to simple closed-form updates based on matrix inversions and eigenvalue decompositions. We show empirically that our method leads to density estimators with higher test-likelihoods than natural competing methods, and that the metrics may be used within most unsupervised learning techniques that rely on such metrics, such as spectral clustering or manifold learning methods. Finally, we present a stochastic approximation scheme which allows for the use of this method in a large-scale setting. | [
"parzen windows",
"kernel",
"metrics",
"popular density estimation",
"outlier detection",
"clustering",
"multivariate data",
"performance",
"reliant"
] | https://openreview.net/pdf?id=mLr3In-nbamNu | https://openreview.net/forum?id=mLr3In-nbamNu | D1cO7TgVjPGT9 | review | 1,361,300,640,000 | mLr3In-nbamNu | [
"everyone"
] | [
"anonymous reviewer 71f4"
] | ICLR.cc/2013/conference | 2013 | title: review of Local Component Analysis
review: In this paper, the authors consider unsupervised metric learning as a
density estimation problem with a Parzen windows estimator based on
Euclidean metric. They use maximum likelihood method and EM algorithm
for deriving a method that may be considered as an unsupervised counterpart to neighbourhood component analysis. Various versions of the method provide good results in the clustering problems considered.
+ Good and interesting conference paper.
+ Certainly novel enough.
- Modifications are needed to combat the problems of overfitting,
local minima, and computational load in the basic approach proposed.
Some of these improvements are heuristic or seem to require hand-tuning.
Specific comments:
- The authors should refer to the paper S. Kaski and J. Peltonen,
'Informative discriminant analysis', in T. Fawcett and N. Mishna (Eds.),
Proc. of the 20th Int. Conf. on Machine Learning (ICML 2003), pp. 329-336,
AAAI Press, Menlo Park, CA, 2003.
In this paper, essentially the same technique as Neighbourhood Component
Analysis is defined under the name Informative discriminant analysis
one year prior to the paper by Goldberger et al., your reference [16].
- In the beginning of page 6 the authors state: 'Following [1, 2], the data
is progressively corrupted by adding dimensions of white Gaussian noise,
then whitened.' In this case, whitening amplifies Gaussian noise, so that
it has the same power as the underlying data. Obviously this is the reason
why the experimental results approach to a random guess when the dimensions of the white noise increase sufficiently. The authors should mention that in real-world applications, one should not use whitening in this kind of situations, but rather compress the data using for example principal component analysis (PCA) without whitening for getting rid of the extra dimensions corresponding to white Gaussian noise. Or at least use the data as such without any whitening. |
mLr3In-nbamNu | Local Component Analysis | [
"Nicolas Le Roux",
"Francis Bach"
] | Kernel density estimation, a.k.a. Parzen windows, is a popular density estimation method, which can be used for outlier detection or clustering. With multivariate data, its performance is heavily reliant on the metric used within the kernel. Most earlier work has focused on learning only the bandwidth of the kernel (i.e., a scalar multiplicative factor). In this paper, we propose to learn a full Euclidean metric through an expectation-minimization (EM) procedure, which can be seen as an unsupervised counterpart to neighbourhood component analysis (NCA). In order to avoid overfitting with a fully nonparametric density estimator in high dimensions, we also consider a semi-parametric Gaussian-Parzen density model, where some of the variables are modelled through a jointly Gaussian density, while others are modelled through Parzen windows. For these two models, EM leads to simple closed-form updates based on matrix inversions and eigenvalue decompositions. We show empirically that our method leads to density estimators with higher test-likelihoods than natural competing methods, and that the metrics may be used within most unsupervised learning techniques that rely on such metrics, such as spectral clustering or manifold learning methods. Finally, we present a stochastic approximation scheme which allows for the use of this method in a large-scale setting. | [
"parzen windows",
"kernel",
"metrics",
"popular density estimation",
"outlier detection",
"clustering",
"multivariate data",
"performance",
"reliant"
] | https://openreview.net/pdf?id=mLr3In-nbamNu | https://openreview.net/forum?id=mLr3In-nbamNu | pRFvp6BDvn46c | review | 1,362,491,220,000 | mLr3In-nbamNu | [
"everyone"
] | [
"anonymous reviewer 61c0"
] | ICLR.cc/2013/conference | 2013 | title: review of Local Component Analysis
review: Summary of contributions:
The paper presents a robust algorithm for density estimation. The main idea is to model the density into a product of two independent distributions: one from a Parzen windows estimation (for modeling a low dimensional manifold) and the other from a Gaussian distribution (for modeling noise). Specifically, leave-one-out log-likelihood is used as the objective function of Parzen window estimator, and the joint model can be optimized using Expectation Maximization algorithm. In addition, the paper presents an analytical solution for M-step using eigen-decomposition. The authors also propose several heuristics to address local optima problems and to improve computational efficiency. The experimental results on synthetic data show that the proposed algorithm is indeed robust to noise.
Assessment on novelty and quality:
Novelty:
This paper seems to be novel. The main ideas (using leave-one-out log-likelihood and decomposing the density as a product of Parzen windows estimator and a Gaussian distribution) are very interesting.
Quality:
The paper is clearly written. The method is well motivated, and the technical solutions are quite elegant and clearly described. The paper also presents important practical tips on addressing local optima problems and speeding up the algorithm.
In experiments, the proposed algorithm works well when noise dimensions increase in the data. The experiments are reasonably convincing, but they are limited to very low-dimensional, toy data. Evaluation on more real-world datasets would have been much more compelling. Without such evaluation, it’s unclear how the proposed method will perform on real data.
Although interesting, the assumption about modeling the data density as a product of two independent distributions can be too strong and unrealistic. For example, how can this model handle the cases when noise are added to the low-dimensional manifold, not as orthogonal “noise dimension”?
Other comments:
- Figure 1 is not very interesting since even NCA will learn near-isotropic covariance, and the baseline method seems to be PCA whitening, not PCA.
Pros and cons:
pros:
- The paper seems sufficiently novel.
- The main approach and solution are technically interesting.
- The experiments show proof-of-concept (albeit limited) demonstration that the proposed method is robust to noise dimensions (or irrelevant features).
cons:
- The experiments are limited to very low-dimensional, toy datasets. Evaluation on more real-world datasets would have been much more compelling. Without such evaluation, it’s unclear how the proposed method will perform on real data.
- The assumption about modeling the data density as a product of two independent distributions can be too strong and unrealistic (see comments above). |
mLr3In-nbamNu | Local Component Analysis | [
"Nicolas Le Roux",
"Francis Bach"
] | Kernel density estimation, a.k.a. Parzen windows, is a popular density estimation method, which can be used for outlier detection or clustering. With multivariate data, its performance is heavily reliant on the metric used within the kernel. Most earlier work has focused on learning only the bandwidth of the kernel (i.e., a scalar multiplicative factor). In this paper, we propose to learn a full Euclidean metric through an expectation-minimization (EM) procedure, which can be seen as an unsupervised counterpart to neighbourhood component analysis (NCA). In order to avoid overfitting with a fully nonparametric density estimator in high dimensions, we also consider a semi-parametric Gaussian-Parzen density model, where some of the variables are modelled through a jointly Gaussian density, while others are modelled through Parzen windows. For these two models, EM leads to simple closed-form updates based on matrix inversions and eigenvalue decompositions. We show empirically that our method leads to density estimators with higher test-likelihoods than natural competing methods, and that the metrics may be used within most unsupervised learning techniques that rely on such metrics, such as spectral clustering or manifold learning methods. Finally, we present a stochastic approximation scheme which allows for the use of this method in a large-scale setting. | [
"parzen windows",
"kernel",
"metrics",
"popular density estimation",
"outlier detection",
"clustering",
"multivariate data",
"performance",
"reliant"
] | https://openreview.net/pdf?id=mLr3In-nbamNu | https://openreview.net/forum?id=mLr3In-nbamNu | iGfW_jMjFAoZQ | review | 1,362,428,640,000 | mLr3In-nbamNu | [
"everyone"
] | [
"anonymous reviewer 18ca"
] | ICLR.cc/2013/conference | 2013 | title: review of Local Component Analysis
review: Summary of contributions:
1. The paper proposed an unsupervised local component analysis (LCA) framework that estimates the Parzen window covariance via maximizing the leave-one-out density. The basic algorithm is an EM procedure with closed form updates.
2. One further extension of LCA was introduced, which assumes two multiplicative densities, one is Parzen window (non Gaussian) and the other is a global Gaussian distribution.
3. Algorithms was designed to scale up the algorithms to large data sets.
Assessment of novelty and quality:
The work looks quite reasonable. But the approach seems to be a bit straightforward. The work is perhaps not very deep or inspiring.
My major concern is, other than the described problem setting being tackled, mostly toy problems, I don't see the significance of the work for addressing major machine learning challenges. For example, the authors argued the approach might be a good preprocessing step, but in the experiments, there is nothing like improving machine learning (e.g. classification) via such a pre-processing of data.
It's disappointing to see that the authors didn't study the identifiability of the Parzen/Gaussian model. Addressing this issue should have been a good chance to show some depth of the research. |
mLr3In-nbamNu | Local Component Analysis | [
"Nicolas Le Roux",
"Francis Bach"
] | Kernel density estimation, a.k.a. Parzen windows, is a popular density estimation method, which can be used for outlier detection or clustering. With multivariate data, its performance is heavily reliant on the metric used within the kernel. Most earlier work has focused on learning only the bandwidth of the kernel (i.e., a scalar multiplicative factor). In this paper, we propose to learn a full Euclidean metric through an expectation-minimization (EM) procedure, which can be seen as an unsupervised counterpart to neighbourhood component analysis (NCA). In order to avoid overfitting with a fully nonparametric density estimator in high dimensions, we also consider a semi-parametric Gaussian-Parzen density model, where some of the variables are modelled through a jointly Gaussian density, while others are modelled through Parzen windows. For these two models, EM leads to simple closed-form updates based on matrix inversions and eigenvalue decompositions. We show empirically that our method leads to density estimators with higher test-likelihoods than natural competing methods, and that the metrics may be used within most unsupervised learning techniques that rely on such metrics, such as spectral clustering or manifold learning methods. Finally, we present a stochastic approximation scheme which allows for the use of this method in a large-scale setting. | [
"parzen windows",
"kernel",
"metrics",
"popular density estimation",
"outlier detection",
"clustering",
"multivariate data",
"performance",
"reliant"
] | https://openreview.net/pdf?id=mLr3In-nbamNu | https://openreview.net/forum?id=mLr3In-nbamNu | c2pVc0PtwzcEK | review | 1,364,253,000,000 | mLr3In-nbamNu | [
"everyone"
] | [
"Nicolas Le Roux, Francis Bach"
] | ICLR.cc/2013/conference | 2013 | review: First, we would like to thank the reviewers for their comments.
The main complaint was that the experiments were limited to toy problems. Since it is always hard to evaluate unsupervised learning algorithms (what is the metric of performance), the experiments were designed as a proof of concept. Hence, we agree with the reviewers and would love to see LCA tried and evaluated on real problems.
For the comment about the required modifications to avoid overfitting, there is truly only one parameter to set, i.e., the lambda parameter. All the others can easily be set to default values. |
OOuGtqpeK-cLI | Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities | [
"Tommi Vatanen",
"Tapani Raiko",
"Harri Valpola",
"Yann LeCun"
] | Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scale of the outputs of each hidden neuron, and secondly by analyzing the connections to second order optimization methods. We show that the transformations make a simple stochastic gradient behave closer to second-order optimization methods and thus speed up learning. This is shown both in theory and with experiments. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum, where both the inputs and outputs of many hidden neurons are close to zero. | [
"transformations",
"outputs",
"stochastic gradient",
"methods",
"backpropagation",
"nonlinearities",
"hidden neuron",
"experiments",
"perceptron network",
"output"
] | https://openreview.net/pdf?id=OOuGtqpeK-cLI | https://openreview.net/forum?id=OOuGtqpeK-cLI | cAqVvWr0KLv0U | review | 1,362,183,240,000 | OOuGtqpeK-cLI | [
"everyone"
] | [
"anonymous reviewer 1567"
] | ICLR.cc/2013/conference | 2013 | title: review of Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities
review: In [10], the authors had previously proposed modifying the network
parametrization, in order to ensure zero-mean hidden unit activations across training examples (activity centering) and zero-mean derivatives (slope centering). This was achieved by introducing skip-connections between layers l-1 and l+1 and adding linear components to the non-linearity of layer l: these new parameters aren't learnt however, but instead are adjusted deterministically to enforce activity and slope centering. These ideas had initially been proposed by Schraudolph in earlier work, with [10] showing that these tricks significantly improved convergence of deep networks while also making the connection to second order methods.
In this work, the authors proposed adding an extra scaling parameter to the non-linearity, which is adjusted in order to make the digonal terms of the Hessian / Fisher Information matrix closer to unity. The authors study the effect of these 3 transformations by:
(1) measuring properties of the Hessian matrix with and without transformations, as well as angular distance of the resulting gradients to 2nd order gradients;
(2) comparing the overall classification convergence speed for a 2 and 3 layer MLPs on MNIST and finally;
(3) studying its effect on a deep auto-encoder.
While I find this research direction particularly interesting, I find the
overlap between this paper and [10] to be rather troubling. While their analysis of slope / activity centering is new (and a more direct test of their
hypothesis), I feel that the case for these transformations had already been
made in [10]. More importantly, evidence for the 3rd transformation is rather weak: it seems to slightly help convergence of 3-layer models and also helps in making the diagonal elements of the Hessian more unimodal. However, including gamma seem to rotate gradients *away* from 2nd order gradients. Also, their method did not seem to help in the deep auto-encoder setting: using gamma in the encoder network did not improve convergence speed, while using gamma in both encoders/decoders led to gamma either blowing-up or going to zero. While you would expect a diagonal approximation to a second-order method to help with the problem of dead-units, adding gamma did not seem to help in this respect.
Similarities between this paper and [10] are also evident in the writing itself. Large portions of Sections 1, 2 and 3 appear verbatim in [10]. This needs to be addressed prior to publication. The math of Section 3 could also be simplified by writing out gradients of log p (for each parameter heta) and then simply stating the general form of the FIM as E_eps[ dlogp/dtheta^T dlogp / dtheta]. As it stands Eqs. (12-17) are slightly inaccurate, as elements of the FIM should include an expectation over epsilon.
Summary: I find the direction promising but the conclusion to be somewhat confusing / disappointing. The premise for gamma seemed well motivated and I expected more concrete evidence explaining the need for this transformation. Unfortunately, I am left wondering where things went wrong: some missing theoretical insight, wrong update rule on gamma or other ?
Other:
* Authors should consider using df/dx instead of the more ambiguous f' notation.
* Could the authors clarify what they mean by: 'transforming the model instead of the gradient makes it easier to generalize to other contexts such as variational Bayes ?' One downside I see to transforming the model instead of the gradients is that it obfuscates the link to second order methods and might thus hide useful insights.
* Section 4: 'algorith' -> algorithm |
OOuGtqpeK-cLI | Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities | [
"Tommi Vatanen",
"Tapani Raiko",
"Harri Valpola",
"Yann LeCun"
] | Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scale of the outputs of each hidden neuron, and secondly by analyzing the connections to second order optimization methods. We show that the transformations make a simple stochastic gradient behave closer to second-order optimization methods and thus speed up learning. This is shown both in theory and with experiments. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum, where both the inputs and outputs of many hidden neurons are close to zero. | [
"transformations",
"outputs",
"stochastic gradient",
"methods",
"backpropagation",
"nonlinearities",
"hidden neuron",
"experiments",
"perceptron network",
"output"
] | https://openreview.net/pdf?id=OOuGtqpeK-cLI | https://openreview.net/forum?id=OOuGtqpeK-cLI | og9azR3sTxoul | review | 1,362,399,720,000 | OOuGtqpeK-cLI | [
"everyone"
] | [
"anonymous reviewer b670"
] | ICLR.cc/2013/conference | 2013 | title: review of Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities
review: This paper builds on previous work by the same authors that looks at performing dynamic reparameterizations of neural networks to improve training efficiency. The previously published approach is augmented with an additional parameter (gamma) which, although it is argued should help in theory, doesn't seem to in practice. Theoretical arguments for why the standard gradient computed under this reparameterization will be closer to a 2nd-order update are made, and experiments are conducted. While the theoretical arguments are pretty weak in my opinion (see detailed comments below), the experiments that looks at eigenvalues of the Hessian are somewhat more convincing, although they indicate that the originally published approach, without the gamma modification, is doing a better job.
Pros:
- reasonably well written
- experiments looking at eigenvalue distributions are interesting
Cons:
- actual method is similar to authors' previous work in [10] and the older method of Schraudolph [12]
- the new modification doesn't seem to improve training efficiency, and even makes the eigenvalue distribution worse
- there seem to be problems with the theoretical analysis (maybe the authors can address this in their response?)
///// Detailed comments \\
Because it sounds similar to what you're doing, I think it would be helpful to give a slightly more detailed description of Schraudolph's 'gradient factor centering'. Does it correspond exactly to what you are doing in the case of neural nets? And if so, could you give an interesting example of how to apply your method to other models where Schraudolph's method would no longer apply?
I don't understand what you mean by 'many competing paths' at the bottom of page 2.
And when talking about 'linear dependencies' from x to y, what exactly do you mean? Do you mean the 1st-order components of the Taylor series of the true mapping or something else? Also, you might want to use affine when discussing functions that are linear + constant to be more technically precise.
Can the arguments in section 3 be applied to network with more than 1 hidden layer?
A concern I have with the analysis in section 3 is that, while assuming uncorrelated hidden unit outputs might be somewhat sensible (although I feel that our intuitions about how neural networks model certain mappings - such as 'representing different things' may be inaccurate), it seems less reasonable to assume that inputs (x) are uncorrelated with the outputs of the units, which seems to be needed to show that off-diagonal terms are zero (other than for eqn 12). You also seem to assume that certain 1st-derivatives of unit outputs are uncorrelated with various quantities (inputs, other unit outputs, and unit derivatives), which I don't think follows from the assumptions about the outputs of the units being uncorrelated with each other (but if this is indeed true, you should prove it or provide a reference). I think you should apply more rigor to these arguments for them to be convincing.
I would recommend using an exact method to compute the Hessian. For example, you can compute it using n matrix-vector products, and tools for computing these automatically for any computational graph are widely available, as are particular formulae for neural networks. Such a method would be no more costly than what you are doing now, which involves n gradient computations.
The discussion surrounding equation 19 is an somewhat inaccurate and oversimplified account of the role that a constant like mu has in a second-order update rule like eqn. 19. This is a well studied and highly complex problem which doesn't really have to do with issues surrounding the inversion of the Hessian 'blowing up' so much as the problems of break-downs in model trust that occur when computing proposals based on local quadratic models of the objective.
Your experiments seem to suggest that the eigenvalues are more even when you leave out the gamma parameter. How do you reconcile this with your theoretical analysis?
Why do you show a histogram of diagonal elements as opposed to eigenvalues in figure 2? I would argue that the concentration of the eigenvalues is a much better indicator of how close the Hessian matrix is to the identity (and hence how close the gradient is to being the same as a 2nd-order update) than what the diagonal entries look like. The diagonal entries of a highly non-diagonal matrix aren't particularly meaningful to look at.
Also, since your analysis was done using the Fisher, why not examine this matrix instead of the Hessian in your experiments? |
OOuGtqpeK-cLI | Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities | [
"Tommi Vatanen",
"Tapani Raiko",
"Harri Valpola",
"Yann LeCun"
] | Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scale of the outputs of each hidden neuron, and secondly by analyzing the connections to second order optimization methods. We show that the transformations make a simple stochastic gradient behave closer to second-order optimization methods and thus speed up learning. This is shown both in theory and with experiments. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum, where both the inputs and outputs of many hidden neurons are close to zero. | [
"transformations",
"outputs",
"stochastic gradient",
"methods",
"backpropagation",
"nonlinearities",
"hidden neuron",
"experiments",
"perceptron network",
"output"
] | https://openreview.net/pdf?id=OOuGtqpeK-cLI | https://openreview.net/forum?id=OOuGtqpeK-cLI | Id_EI3kn5mX4i | review | 1,362,387,060,000 | OOuGtqpeK-cLI | [
"everyone"
] | [
"anonymous reviewer c3d4"
] | ICLR.cc/2013/conference | 2013 | title: review of Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities
review: * A brief summary of the paper's contributions, in the context of prior work.
This paper extends the authors' previous work on making sure that the hidden units in a neural net have zero output and slope on average, by also using direct connections that model explicitly the linear dependencies. The extension introduces another transformation which changes the scale of the outputs of the hidden units: essentially, they try to normalize both the scale and the slope of the outputs to one. This is done (essentially) by introducing a regularization parameter that encourages the geometric mean of the scale and the slope to be one.
The paper's contributions are also to give a theoretical analysis of the effect of the proposed transformations. The already proposed tricks are shown to make the non-diagonal elements of the Fisher information matrix closer to zero. The new transformation makes the diagonal elements closer to each other in scale, which is interesting as it's similar to what natural gradient does.
The authors also provide an empirical analysis of how the proposed method is close to what a second-order method would do (albeit on a small neural net). The experiment with the angle between the gradient and the second-order update is quite nice (I think such an experiment should be part of any paper that proposes new optimization tricks for training neural nets).
* An assessment of novelty and quality.
Generally, this is a well-written and clear paper that extends naturally the authors' previous work. I think that the analysis is interesting and quite readable. I don't think that these particular transformations have been considered before in the literature and I like that they are not simply fixed transformations of the data, but something which integrates naturally into the learning algorithm.
* A list of pros and cons (reasons to accept/reject).
The proposed scaling transformation makes sense in theory, but I'm not sure I agree with the authors' statement (end of Section 5) that the method's complexity is 'minimal regularization' compared to dropouts (maybe in theory, but honestly implementing dropout in a neural net learning system is considerably easier). The paper also doesn't show significant improvements (beyond analytical ones) over the previous transformations; based on the empirical results only I wouldn't necessarily use the scaling transformation. |
OOuGtqpeK-cLI | Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities | [
"Tommi Vatanen",
"Tapani Raiko",
"Harri Valpola",
"Yann LeCun"
] | Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scale of the outputs of each hidden neuron, and secondly by analyzing the connections to second order optimization methods. We show that the transformations make a simple stochastic gradient behave closer to second-order optimization methods and thus speed up learning. This is shown both in theory and with experiments. The experiments on the third transformation show that while it further increases the speed of learning, it can also hurt performance by converging to a worse local optimum, where both the inputs and outputs of many hidden neurons are close to zero. | [
"transformations",
"outputs",
"stochastic gradient",
"methods",
"backpropagation",
"nonlinearities",
"hidden neuron",
"experiments",
"perceptron network",
"output"
] | https://openreview.net/pdf?id=OOuGtqpeK-cLI | https://openreview.net/forum?id=OOuGtqpeK-cLI | 8PUQYHnMEx8CL | review | 1,363,039,740,000 | OOuGtqpeK-cLI | [
"everyone"
] | [
"Tommi Vatanen, Tapani Raiko, Harri Valpola, Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | review: First of all we would like to thank you for your informed, thorough and kind comments. We realize that there is major overlap with our previous paper [10]. We hope that these two papers could be combined in a journal paper later on. It was mentioned that we use some text verbatim from [10]. There is some basic methodology which is necessary to explain before going to deeper explanations and we felt that it is not a big violation to use our own text. However, we have now modified the sections in question with your comments and proposals in mind. If you feel that it is necessary to check every sentence for verbatim, please consider conditional acceptance with this condition.
We agree that the evidence supporting the use of the third transformation is rather weak. We have tried to report our findings as honestly as possible and also express our doubts in the paper (see, e.g., end of Section 4).
To reviewer 'Anonymous 1567':
You argue that Eqs. (12-17) are slightly accurate. However, we have computed the expectation over epsilon with pen and paper and epsilon does vanish from the Eqs. Thus, the Eqs. in question are exact. Would you think that we should still write down the gradients explicitly?
We had considered using the df/dx notation, but decided to use the f' notation, since the derivative is taken with respect to Bx and using the df/dx notation would require us to define a new variable u = Bx and denote df/du. We think, this would further clutter the equations. Would you think this is acceptable?
We tried to clarify the meaning of 'transforming the model instead of the gradient ...' in Discussion.
To reviewer 'Anonymous b670':
We have now explained the relationship to Schraudolph's method in more detail. We provide an example and refer to Discussion of [10].
When writing about 'many competing paths' and 'linear dependencies', we have added explanations with equations in the updated version.
The question, whether the arguments in Section 3 can be applied to networks with more than one hidden layer: We have presented the theory with this simplified case in order to convey the understanding to the reader. We assume that the idea could be formulated in the general (deep) case, but writing it out would substantially complicate the equations. Our experimental results support this assumption.
About the uncorrelatedness assumption, we have added the following explanation: 'Naturally, it is unrealistic to assume that inputs $x_t$, nonlinear activations $f(cdot)$, and their slopes $f^prime(cdot)$ are all uncorrelated, so the goodness of this approximation is empirically evaluated in the next section.'
We do realize that it is possible and more elegant to compute exact solution for the Hessian matrix. However, as being more error prone, it would require careful checking by, e.g., some approximative method. As the mere approximation suits our needs well, we refrained from doing the extra work for the exact solution. We have also acknowledged this in the paper.
Regarding mu in Eq. 19: Thanks for this remark. We have reformulated the text surrounding Eq. 19. Could you kindly provide further suggestions and/or references if you still find it unsatisfactory.
Experiments on eigenvalue distribution: Fig. 1(a) suggest that there is no clear difference between the eigenvalue distributions with our without gamma (the vertical position of the plot is irrelevant since it corresponds to choosing a different learning rate).
We show histogram of diagonal elements in order to distinguish between weights. For instance, the colors in Fig.2 could not have been used otherwise.
Fisher Information vs. Hessian matrix: This is a relevant point for the future work. The Hessian describes the curvature of the actual optimization problem. We chose Fisher information matrix in the theoretical part simply because it has more compact equations. As we note in the paper, 'the hessian matrix is closely related to the Fisher information matrix, but it does depend on the output data and contains more term'. We argue that the terms present in Fisher information matrix will make our point clear and adding the other terms included in the Hessian would just be additional clutter.
Tommi Vatanen and Tapani Raiko |
UUwuUaQ5qRyWn | When Does a Mixture of Products Contain a Product of Mixtures? | [
"Guido F. Montufar",
"Jason Morton"
] | We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and characterizations of possible modes and support sets of multivariate probability distributions that can be represented by both model classes. We find, in particular, that an exponentially larger mixture model, requiring an exponentially larger number of parameters, is required to represent probability distributions that can be represented by the restricted Boltzmann machines. The title question is intimately related to questions in coding theory and point configurations in hyperplane arrangements. | [
"mixtures",
"products",
"mixture",
"product",
"product distributions",
"restricted boltzmann machines",
"results",
"relative representational power",
"pairs",
"tools"
] | https://openreview.net/pdf?id=UUwuUaQ5qRyWn | https://openreview.net/forum?id=UUwuUaQ5qRyWn | boGLoNdiUmbgV | review | 1,362,582,360,000 | UUwuUaQ5qRyWn | [
"everyone"
] | [
"anonymous reviewer 51ff"
] | ICLR.cc/2013/conference | 2013 | title: review of When Does a Mixture of Products Contain a Product of Mixtures?
review: This paper attempts at comparing mixture of factorial distributions (called product distributions) to RBMs. It does so by analyzing several theoretical properties, such as the smallest models which can represent any distribution with a given number of strong modes (or at least one of these distributions) or the smallest mixture which can represent all the distributions of a given RBM.
The relationship between RBMs and other models using hidden states is not fully understood and any clarification is welcome. Unfortunately, not only I am not sure the MoP is the most interesting class of models to analyze, but the theorems focus on extremely specific properties which severely limits their usefulness:
- the definition of strong modes makes the proofs easier but it is hard to understand how they relate to 'interesting' distributions. I understand this is a very vague notion but I would have appreciated hints about how the distributions we care about tend to have a high number of strong modes.
- the fact that there are exponentially many inference regions for an RBM whereas there are only a linear number of them for a MoP seems quite obvious, merely by counting the number of hidden states configurations. I understand this is far from a proof but this is to me more representative of the fact that one does not want to use the hidden states as a new representation for a MoP, which we already knew.
Additionnally, the paper is very heavy on definitions and gives very little intuition about the meaning of the results. Theorem 29 is a prime example as it takes a very long time to parse the result and I could really have used some intuition about the meaning of the result. This feeling is reinforced by the length of the paper (18 when the guidelines mentioned 9) and the inclusion of propositions which seem anecdotal (Prop.7, section 2.1, Corollary 18).
In conclusion, this paper tackles a problem which seems to be too contrived to be of general interest. Further, it is written in an unfriendly way which makes it more appropriate to a very technical crowd.
Minor comments:
- Definition 2, you have that C is included in {0, 1}^n. That makes C a vector, not a set.
- Proposition 8: I think that G_3 should be G_4. |
UUwuUaQ5qRyWn | When Does a Mixture of Products Contain a Product of Mixtures? | [
"Guido F. Montufar",
"Jason Morton"
] | We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and characterizations of possible modes and support sets of multivariate probability distributions that can be represented by both model classes. We find, in particular, that an exponentially larger mixture model, requiring an exponentially larger number of parameters, is required to represent probability distributions that can be represented by the restricted Boltzmann machines. The title question is intimately related to questions in coding theory and point configurations in hyperplane arrangements. | [
"mixtures",
"products",
"mixture",
"product",
"product distributions",
"restricted boltzmann machines",
"results",
"relative representational power",
"pairs",
"tools"
] | https://openreview.net/pdf?id=UUwuUaQ5qRyWn | https://openreview.net/forum?id=UUwuUaQ5qRyWn | dPNqPnWus1JhM | review | 1,362,219,240,000 | UUwuUaQ5qRyWn | [
"everyone"
] | [
"anonymous reviewer 6c04"
] | ICLR.cc/2013/conference | 2013 | title: review of When Does a Mixture of Products Contain a Product of Mixtures?
review: This paper compares the representational power of Restricted Boltzmann Machines
(RBMs) with that of mixtures of product distributions. The main result is that
RBMs can be exponentially more efficient (in terms of the number of parameters
required) to represent some classes of probability distributions. This provides
theoretical justifications to the intuition behind the motivation for
distributed representations, i.e. that the combinations of an RBN's hidden
units can give rise to highly varying distributions, with a number of modes
exponential in the model's size.
This paper is very dense, and unfortunately I had to fast-forward through it in
order to be able to submit my review in time. Although most of the derivations
do not appear to be that complex, they build on existing results and concepts
that the typical machine learning crowd is typically unfamiliar with. As a
result, one can be quickly overwhelmed by the amount of new material to digest,
and going through all steps of all proofs can take a long time.
I believe the results are interesting since they provide a theoretical
foundation to ideas that have been motivating the use of distributed
representations. As a result, I think they are quite relevant to current
research on learning representations, even if the practical insights seem
limited.
The maths appear to be solid, although I definitely did not check them in
depth. I appreciate the many references to previous work.
Overall, I think this paper deserves being published, although I wish it was
made more accessible to the general machine learning audience, since in its
current state it takes a lot of motivation to go through it. Providing
additional discussion throughout the whole paper on the motivations and
insights behind these many theoretical results, instead of mostly limiting them
to the introduction and discussion, would help the understanding and make the
paper more enjoyable to read.
Pros: relevant theoretical results, (apparently) solid maths building on previous work
Cons: requires significant effort to read in depth, little practical use
Things I did not understand:
- Fig. 1 (as a whole)
- Last paragraph of 1.1: why is this interesting?
- Fig. 5 (not clear why it is in some kind of pseudo-3D and what is the meaning
of all these lines -- also some explanations come after it is referenced, which
does not help)
- '(...) and therefore it contains distributions with (...)': I may be missing
something obvious, but I did not follow the logical link ('therefore')
- I am unable to parse Remark 22, not sure if there is a typo (double 'iff') or
I am just not getting it.
Typos or minor points:
- It seems like Fig. 3 and 4 should be swapped to match the order in which they
appear in the text
- 'Figure 3 shows an example of the partitions (...) defined by the models
M_2,4 and RBM_2,3' -> mention also 'for some specific parameter values' to be
really clear
- Deptartment (x2)
- Lebsegue
- I believe the notation H_n is not explicitly defined (although it can be
inferred from the definition of G_n)
- There is a missing reference with a '?' on p. 9 after 'm <= 9'
- It seems to me that section 6 is also related to the title of section 5.
Should it be a subsection?
- 'The product of mixtures represented by RBMs are (...)': products
- 'Mixture model (...) generate': models |
UUwuUaQ5qRyWn | When Does a Mixture of Products Contain a Product of Mixtures? | [
"Guido F. Montufar",
"Jason Morton"
] | We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and characterizations of possible modes and support sets of multivariate probability distributions that can be represented by both model classes. We find, in particular, that an exponentially larger mixture model, requiring an exponentially larger number of parameters, is required to represent probability distributions that can be represented by the restricted Boltzmann machines. The title question is intimately related to questions in coding theory and point configurations in hyperplane arrangements. | [
"mixtures",
"products",
"mixture",
"product",
"product distributions",
"restricted boltzmann machines",
"results",
"relative representational power",
"pairs",
"tools"
] | https://openreview.net/pdf?id=UUwuUaQ5qRyWn | https://openreview.net/forum?id=UUwuUaQ5qRyWn | vvzH6kFyntmsR | comment | 1,364,258,160,000 | FdwnFIZNOxF5S | [
"everyone"
] | [
"anonymous reviewer 6c04"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks for the updated version, I've re-read it quickly and it's indeed a bit clearer! |
UUwuUaQ5qRyWn | When Does a Mixture of Products Contain a Product of Mixtures? | [
"Guido F. Montufar",
"Jason Morton"
] | We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and characterizations of possible modes and support sets of multivariate probability distributions that can be represented by both model classes. We find, in particular, that an exponentially larger mixture model, requiring an exponentially larger number of parameters, is required to represent probability distributions that can be represented by the restricted Boltzmann machines. The title question is intimately related to questions in coding theory and point configurations in hyperplane arrangements. | [
"mixtures",
"products",
"mixture",
"product",
"product distributions",
"restricted boltzmann machines",
"results",
"relative representational power",
"pairs",
"tools"
] | https://openreview.net/pdf?id=UUwuUaQ5qRyWn | https://openreview.net/forum?id=UUwuUaQ5qRyWn | dYGvTnylo5TlF | review | 1,361,559,180,000 | UUwuUaQ5qRyWn | [
"everyone"
] | [
"anonymous reviewer 91ea"
] | ICLR.cc/2013/conference | 2013 | title: review of When Does a Mixture of Products Contain a Product of Mixtures?
review: The paper analyses the representational capacity of RBM's, contrasting it with other simple models.
I think the results are new but I'm definitely not an expert on this field. They are likely to be interesting for people working on RBM's, and thus to people at ICLR. |
UUwuUaQ5qRyWn | When Does a Mixture of Products Contain a Product of Mixtures? | [
"Guido F. Montufar",
"Jason Morton"
] | We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and characterizations of possible modes and support sets of multivariate probability distributions that can be represented by both model classes. We find, in particular, that an exponentially larger mixture model, requiring an exponentially larger number of parameters, is required to represent probability distributions that can be represented by the restricted Boltzmann machines. The title question is intimately related to questions in coding theory and point configurations in hyperplane arrangements. | [
"mixtures",
"products",
"mixture",
"product",
"product distributions",
"restricted boltzmann machines",
"results",
"relative representational power",
"pairs",
"tools"
] | https://openreview.net/pdf?id=UUwuUaQ5qRyWn | https://openreview.net/forum?id=UUwuUaQ5qRyWn | FdwnFIZNOxF5S | review | 1,363,384,620,000 | UUwuUaQ5qRyWn | [
"everyone"
] | [
"Guido F. Montufar, Jason Morton"
] | ICLR.cc/2013/conference | 2013 | review: We thank all three reviewers for the helpful comments, which enabled us to improve the paper. We have uploaded a revision to the arxiv taking into account the comments, and respond to some specific concerns below.
We were unsure as to whether we should make the paper longer by providing more in-line intuition around the steps of the proof of our main results. This would address the concerns of Reviewers 6c04 and 51ff, who thought some additional intuition throughout would be helpful, while Reviewer 51ff felt that the paper was perhaps too long as it was. We elected to balance these concerns by making significant changes to improve clarity without greatly expanding the exposition, making a net addition of about a page of text. However, by moving some material to the appendix, the main portion of the paper has been reduced in length to 14 pages.
Responding to specific comments:
Reviewer 6c04:
>> Things I did not understand:
>>- Fig. 1 (as a whole)
We have reworked this figure and improved the explanation in the caption; the intensity of the shading represents the value of log(k), that is the function $f(m,n) = min { log(k): mathcal{M}_{n,k}$ contains RBM_{n,m} }$.
>>- Last paragraph of 1.1: why is this interesting?
Since we are arguing that the sets of probability distributions representable by RBMs and MoPs are quite different, we thought it would be interesting to mention what is known about when these two sets do intersect. We have added a comment about this.
>>- Fig. 5 (not clear why it is in some kind of pseudo-3D and what is the meaning of all these lines -- also some explanations come after it is referenced, which does not help)
We have reworked the figure and added additional explanation in the text where the figure is referenced. This is a picture of the interior of a 3-dimensional simplex (a tetrahedron with vertices corresponding to the outcomes (0,0), (0,1), (1,0), (1,1)), with three sets of probability distributions depicted. The curved set is a 2-dimensional surface. The regions at the top and bottom are polyhedra, and the lines in the original figure were the edges of these polyhedra (the edges in back have now been removed to make the rendering clearer). Additionally, we linked to an interactive 3-d graphic object of Fig. 5. Using Adobe Acrobat Reader 7 (or higher) the reader can rotate and slice this object in 3-d.
>>- '(...) and therefore it contains distributions with (...)': I may be missing something obvious, but I did not follow the logical link ('therefore')
We expanded and rephrased this to hopefully be more clear.
>>- I am unable to parse Remark 22, not sure if there is a typo (double 'iff') or I am just not getting it.
We rewrote this remark, sorry for the confusion. The meaning was that the three statements (X iff Y iff Z) are equivalent.
>>Typos or minor points:
>> - It seems like Fig. 3 and 4 should be swapped to match the order in which they appear in the text
>>- 'Figure 3 shows an example of the partitions (...) defined by the models M_2,4 and RBM_2,3' -> mention also 'for some specific parameter values' to be really clear
>>- Deptartment (x2)
>>- Lebsegue
>>- I believe the notation H_n is not explicitly defined (although it can be inferred from the definition of G_n)
>>- There is a missing reference with a '?' on p. 9 after 'm <= 9'
>>- It seems to me that section 6 is also related to the title of section 5. Should it be a subsection?
>>- 'The product of mixtures represented by RBMs are (...)': products
>>- 'Mixture model (...) generate': models
Thank you, we fixed these.
Reviewer 51ff:
>>In conclusion, this paper tackles a problem which seems to be too contrived to be of general interest. Further, it is written in an unfriendly way which makes it more appropriate to a very technical crowd.
>>- the fact that there are exponentially many inference regions for an RBM whereas there are only a linear number of them for a MoP seems quite obvious, merely by counting the number of hidden states configurations. I understand this is far from a proof but this is to me more representative of the fact that one does not want to use the hidden states as a new representation for a MoP, which we already knew.
In part this is simply a difference of philosophy. Some place greater emphasis on an intuition or demonstration on a dataset, while others prefer to see a proof. We recognize we may not have a lot to offer those comfortable relying upon their intuitive or empirical grasp of the situation, and instead aim to provide some mathematical proof to back up that intuition and satisfy the second group.
In trying to show that one class of models (RBMs or distributed representations) is better than another (here, non-distributed representations or naive Bayes models) at representing complex distributions, one must make a choice of criteria for comparison. One can pick, inevitably arbitrarily, a dataset for comparison and produce an empirical comparison. To provide a proof or theoretical comparison, one must choose a metric of complexity. Of course, we always want larger and more natural datasets and broader metrics, but one must start somewhere. We felt that in measuring the complexity of a distribution, the bumpiness of a probability distribution, or number of local maxima, modes, or strong modes in the Hamming topology was a reasonable place to start. While we examined other metrics of distribution complexity, this was one that provided enough leverage to distinguish the models. In the Discussion section, we talk about why multi-information, for example, is not suitable for making this distinction. Making such a choice of metric is the unfortunate price of theoretical justifications.
Additionally, the number of inference regions was not claimed to be new, but part of the exposition about the widespread intuition regarding distributed representations. We have added some exposition to clarify this.
Why we chose MoP: we wanted to compare distributed representations with non-distributed representations. Since we are interested in learning representations, these should be two models with hidden variables that hold the representation. For a non-distributed model with hidden variables and the same observables as an RBM, the na'ive Bayes or MoP model is canonical. For example, a k-way interaction model might also be a good comparison, but it lacks hidden nodes.
>>Additionnally, the paper is very heavy on definitions and gives very little intuition about the meaning of the results. Theorem 29 is a prime example as it takes a very long time to parse the result and I could really have used some intuition about the meaning of the result. This feeling is reinforced by the length of the paper (18 when the guidelines mentioned 9) and the inclusion of propositions which seem anecdotal (Prop.7, section 2.1, Corollary 18).
Sorry for the confusion. The introduction, as well as Figure 1 is devoted to explaining and interpreting Theorem 29. The statements therein such as 'We find that the number of parameters of the smallest MoP model containing an RBM model grows exponentially in the number of parameters of the RBM for any fixed ratio $0!<!m/n!<!infty$, see Figure 1' are hopefully more-intuitive corollaries of Theorem 29. The structure of the paper is to try to put the intuitive explanation of the results first, then give the (necessarily technical) proof showing how the results were obtained. We have added a pointer before Theorem 29 to indicate this.
In the revision we added explanations providing additional intuition as to why we are making certain definitions, and a road map of how the main results are proved.
>>Minor comments:
>> - Definition 2, you have that C is included in {0, 1}^n. That makes C a vector, not a set.
No, a subset as we write $mathcal{C} subset mathcal{X}$ of the set of (binary) strings $mathcal{X}$ of length n is again a set of (binary) strings. One could of course interpret it in terms of a vector of indicator functions, but this is not the approach needed here.
>> - Proposition 8: I think that G_3 should be G_4.
Sorry for the confusion. Again this is correct as is; G_4 would refer to binary strings of length 4, while the Proposition concerns strings of length 3. |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | TTDqPocbXWPbU | review | 1,364,548,920,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Richard Socher"
] | ICLR.cc/2013/conference | 2013 | review: Hi,
This looks a whole lot like the semi-supervised recursive autoencoder that we introduced at EMNLP 2011 [1] and the unfolding recursive autoencoder that we introduced at NIPS 2011.
These models also have a reconstruction + cross entropy error at every iteration and hence do not suffer from the vanishing gradient problem.
The main (only?) differences are the usage of a rectified linear unit instead of tanh and restricting yourself to have a chain structure which is just a special case of a tree structure.
[1] http://www.socher.org/index.php/Main/Semi-SupervisedRecursiveAutoencodersForPredictingSentimentDistributions |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | 10n94yAXr20pD | comment | 1,363,534,380,000 | 5Br_BDba_D57X | [
"everyone"
] | [
"anonymous reviewer bc93"
] | ICLR.cc/2013/conference | 2013 | reply: It's true that any deep NN can be represented by a large recurrent net, but that's not the point I was making. The sentence I commented on gives the impression that a recurrent network has the same representational power as any deep network 'while substantially reducing the number of trainable parameters'. If you construct an RNN the way you described in your answer to my remark, you don't reduce the number of trainable parameters at all.
Put differently, the impression that this particular sentence gives, is that you can simply take a recurrent net, iterate it 5 times, and you would have the same representational power as any 5-layer deep NN (with the same number of nodes in each layer as the RNN), but with only one 5-th of the trainable parameters. This is, as I'm sure you'll agree, simply not true.
Remember, my remark is only concerned with the precise wording of the message you wish to convey. I do agree that iterating the network gives you more representational power for a fixed number of trainable parameters (that is more or less what you have shown in your paper), just not that it gives you as much representational power as in the case where the recurrent weights can be different each iteration (which is what happens in an equivalent deep NN). |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | NNXtqijEtiN98 | review | 1,363,222,920,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Jason Rolfe"
] | ICLR.cc/2013/conference | 2013 | review: We are very thankful to all the reviewers and commenters for their constructive comments.
* Anonymous 8ddb:
1. Indeed, the architecture of DrSAE is similar to a deep sparse rectifier neural network (Glorot, Bordes, and Bengio, 2011) with tied weights (Bengio, Boulanger-Lewandowski and Pascanu, 2012). In addition to the loss functions used, DrSAE differs from deep sparse rectifier neural networks with tied weights in that the input projects to all layers. We note this connection in the next-to-last paragraph of Section 1, and have added the reference to the citation you suggest.
It is true that part-units are strongly connected to the inputs while categorical-units are more strongly connected to part units than to the inputs. The categorical-units seem to act like units in the top layers of a multilayer network.
2(a). The input is indeed fed into all layers. We have added an explicit mention of this in the third paragraph of section 1, and in the first paragraph of section 2.
2(b). We removed the statement suggesting that DrSAE is less subject to the vanishing gradient problem in the introduction, because we have little hard evidence for it in the paper.
However, the intuition behind the statement is somewhat opposite to Yoshua Bengio's argument: the overall 'gain' of the recurrent encoder network (without input provided to each layer) must be around 1, simply because it is trained to reconstruct the input through a linear decoder whose columns have norm equal 1. The unit activities can neither explode nor vanish over the recurrent steps because of that. Since the overall recurrent encoder has gain 1, each of the (identical) layers must have gain 1 too. Because of the reconstruction criterion, each recurrent step must also be approximately invertible (otherwise information would be lost, and reconstruction would be impossible). It is our intuition that in a sequence of invertible layers whose gain in 1, there is little vanishing gradient issues and little gradient 'diffusion' issue (the informal notion of gain can be made precise with in terms of eigenvalues of the Jacobian).
We do observed that as training of a DrSAE progresses, the magnitude of the gradient tends to equalize between all layers. But this will be the subject of future investigations.
3. The column-wise bounds on the norms of the matrices are enforced through projection on the unit sphere (i.e., column-wise scaling) after each SGD step. We have added explicit mention of this in footnote 2.
5. Units still differentiate into part-units and categorical-units with only two temporal steps, but the prototypes are not as clean. We have added a mention of this to the end of section 4. Further investigation of the effect of the choice of the encoder on the differentiation into categorical-units and part-units will be the subject of future work.
* Testing on other datasets than MNIST (Anonymous 8ddb and Anonymous a32e):
Yes, results other datasets like CIfAR would be ideal, but this will require a convolutional (or locally-connected) version of the method, since almost all architectures that yield good results on natural image datasets are of that type. We are currently working on a convolutional extension to DrSAE, which we are applying to classification of natural image datasets. But we believe that the architecture, algorithm, and results are interesting enough to be brought to the attention of the community before results on natural images become available.
That said, in preliminary testing using fully-connected DrSAE, we've obtained results superior to the deep sparse rectifier neural networks of Glorot, Bordes & Bengio (2011) on CIFAR-10; specifically, 48.19% error rate using only 200 hidden units per layer, versus their error rate 49.52% using 1000 hidden units per layer. Since Glorot et al. use a similar architecture (as discussed in point 1), this suggests that the differentiation into part-units and categorical-units improves classification performance on natural images.
* Anonymous a32e:
1. The architecture of the network is captured by equation 2 and figure 1. The loss function is specified in equations 1 and 4. The review of prior work and discussion of its relation to our network necessarily assumes familiarity with the prior work, since there is only space for a cursory summary of the published ideas upon which we draw. However, we would hope that the main analysis in the paper, in sections 3, 4, and 5, are understandable even without intimate familiarity with LISTA and the like.
2. The natural way to avoid manually chosen constants is to do an automatic search of hyperparameter space, maximizing the performance on a validation set. We hope to perform this search in the near future, as it will likely improve classification performance. As it stands, our ad-hoc parameters effectively offer a lower bound on the performance obtainable with a more rigorous search of hyperparameter space.
4. There are two kinds of 'fairness' in comparing results: 1. keep the computational complexity constant; 2. keep the number of parameters constant. The comparison between 2 and 11 time steps is intended to keep the number of parameters constant (though it does increase the computational complexity). It is unclear how one could hold both the number of parameters and the computational load constant within the DrSAE framework.
5. A more systematic exploration of encoder depth should certainly be undertaken as part of a complete search of the hyperparameter space.
* Yoshua Bengio:
1. We are presently exploring the cause of the differentiation into part-units and categorical units. In particular, we've now succeeded in inducing the differentiation using an unsupervised criterion derived from the discriminative loss of DrSAE. The interaction between our logistic loss function and the autoencoding framework thus seems to constitute the crucial ingredient beyond what is present in similar networks like your deep sparse rectifier neural networks. This work is ongoing, but we look forward to reporting this result soon. It would be interesting to explore the degree to which the rectified-linear activation function is necessary for the differentiation into part- and categorical-units. Our intuition, based upon experience with this unsupervised regularizer, as well as the fact that units differentiate even in a two-hidden-layer DrSAE, is that this activation function is not essential.
2. Please see point 2(b) in response to reviewer Anonymous 8ddb.
3 & 4. Thank you for the references. They have been included in the paper. We think it is worth noting, though, that dropout, tangent propagation, and iterative pretraining and stacking of networks (as in deep convex networks) are regularizations or augmentations of the training procedure that may be applicable to a wide class of network architectures, including DrSAE. |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | vCQPfwXgPoCu7 | review | 1,364,571,960,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | review: Minor side comment: IN GENERAL, having a cost term at each iteration (time step of the unfolded network) does not eliminate the vanishing gradient problem!!!
The short-term dependencies can now be learned through the gradient on the cost on the early iterations, but the long-term effects may still be improperly learned. Now it may be that one is lucky (and that could apply in your setting) and that the weights that are appropriate for going from the state at t to a small cost at t+delta with small delta are also appropriate for minimizing the longer term costs for large delta.
There are good examples of that in the literature. A toy example is the recurrent network that learns the parity of a sequence. Because of the recursive nature of the solution, if you do a very good job at predicting the parity for short sequences, there is a good chance that the solution will generalize properly to much longer sequences. Hence a curriculum that starts with short sequences and gradually extends to longer ones is able to solve the problem, where only training from long ones without intermediate targets at every time step completely fails. |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | __De_0xQMv_R3 | review | 1,361,907,180,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: Thank you for this interesting contribution. The differentiation of hidden units into class units and parts units is fascinating and connects with what I consider a central objective for deep learning, i.e., learning representations where the learned features disentangle the underlying factors of variation (as I have written many times in the past, e.g., Bengio, Courville & Vincent 2012). Why do you think this differentiation is happening? What are the crucial ingredients of your setup that are necessary to observe that effect?
I have a remark regarding this sentence on the first page: 'Recurrence opens the possibility of sharing parameters between successive layers of a deep network, potentially mitigating the vanishing gradient problem'. My intuition is that the vanishing/exploding gradient problem is actually *worse* with recurrent nets than with regular (unconstrained) deep nets. One way to visualize this is to think of (a) multiplying the same number with itself k times, vs (b) multiplying a k random numbers. Clearly, (a) will explode or vanish faster because in (b) there will be some 'cancellations'. Recurrent nets correspond to (a) because the weights are the same at all time steps (but yes, the non-linearities derivative will be different), whereas unconstrained deep nets correspond to (b) because the weight matrices are different at each layer.
Minor point about prior work: in the very old days I worked on using recurrent nets trained by BPTT to iteratively reconstruct missing inputs and produce discriminatively trained outputs. It worked quite well. NIPS'95, Recurrent Neural Networks for Missing or Asynchronous Data.
Regarding the results on MNIST, among the networks without convolution and transformations, one should add the Manifold Tangent Classifier (0.81% error), which uses unsupervised pre-training, the Maxout Networks with dropout (0.94%, no unsupervised pre-training), DBMs with dropout (0.79%, with unsupervised pre-training), and the deep convex networks (Yu & Deng, 0.83% also with unsupervised learning). |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | uc38pbD6RhB1Z | review | 1,363,316,520,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"anonymous reviewer bc93"
] | ICLR.cc/2013/conference | 2013 | title: review of Discriminative Recurrent Sparse Auto-Encoders
review: SUMMARY:
The authors describe a discriminative recurrent sparse auto-encoder, which is essentially a recurrent neural network with a fixed input and linear rectifier units. The auto-encoder is initially trained to reproduce digits of MNIST, while enforcing a sparse representation. In a later phase it is trained in a discriminative (supervised) fashion to perform classification.
The authors discuss their observations. Most prominently they describe the occurrence of two types of nodes: part-units, and categorical units The first are units that encode low-level features such as pen-strokes, whereas the second encode specific digits within the MNIST set. It is shown that before the discriminative training, the image reconstruction happens mostly by combining pen-strokes, whereas after the discriminative training, image reproduction happens mainly by the combination of a prototype digit of the corresponding class, which is subsequently transformed by adding pen-stroke-like features. The authors state that this observation is consistent with the underlying hypothesis of auto-encoders that the data lies on low-dimensional manifolds, and the auto-encoder learns to split the representation of a digit into a categoric prototype and a set of transformations.
GENERAL OPINION
The paper and the suggested network architecture is interesting and, as far as I know, quite original. It is also compelling to see the unique ways in which the unsupervised and supervised training contribute to the image reconstruction. Overall I believe this paper is a suited contribution to this conference. I have some questions and remarks that I will list here.
QUESTIONS
- From figure 5 I get the impression that the states dynamics are convergent; for sufficiently large T, the internal state of the nodes (z) will no longer change. This begs the question: is the ideal situation that where T goes to infinity? If so, could you consider the following scenario: We somehow compute the fixed, final state $z(infty)$ (maybe this can be performed faster than by simply iterating the system). Once we have it, we can perform backpropagation-through-time on a sequence where each step in time, the states are identical (the fixed-point state). This would be an interesting scenario, as you might be able to greatly accelerate the training process (all Jacobians are identical, error backpropagation has an analytical solution), and you explicitly train the system to perform well on this fixed point, transient effects are no longer important.
Perhaps I'm missing some crucial detail here, but it seems like an interesting scenario to discuss.
- On a related note: what happens if - after training - the output (image reconstruction and classification) is constructed using the state from a later/earlier point in time? How would performance degrade as a function of time?
REMARKS
- In both the abstract and the introduction the following sentence appears: 'The depth implicit in the temporally-unrolled form allows the system to exhibit all the power of deep networks, while substantially reducing the number of trainable parameters'. I believe this is an dangerous statement, as tied weights will also impose a severe restriction on representational power (so they will not have 'all the power of deep networks'). I would agree with a rephrasing of this sentence that says something along the lines of: 'The depth implicit in the temporally-unrolled form allows the system to exhibit far more representational power, while keeping the number of trainable parameters fixed'.
- I agree with the Yoshua's remark on the vanishing gradient problem. Tied weights cause every change in parameter space to be exponentially amplified/dampened (save for nonlinear effects), making convergence harder. The authors should probably rewrite this sentence.
- I deduce from the text that the system is only trained to provide output (image reconstruction and classification) at the T-th iteration. As such, the backpropagated error only is 'injected' at this point in time. This is distinctly different form the 'common' BPTT setup, where error is injected at each time step, and the authors should maybe explicitly mention this. Apparently reviewer 'Anonymous 8ddb' has interpreted the model as if it was to provide output at each time step ('the reconstruction cost found at each step which provide additional error signal'), so definitely make this more clear.
- The authors mention that they trained the DrSAE with T=11, so 11 iterations. I suspect this number emerges from a balance between computational cost and the need for a sufficient amount of iterations? Please explicitly state this in your paper.
- As a general remark, the comparison to ISTA and LISTA is interesting, but the authors go to great lengths to finding detailed analogies, which might not be that informative. I am not sure whether the other reviewers would agree with me, but maybe the distinction between categorical and part-units can be deduced without this complicated and not easy-to-understand analysis. It took me some time to figure out the content of paragraphs 3.1 and 3.2.
- I also agree with other reviewers that it is unfortunate that only MNIST has been considered. Results on more datasets, and especially other kinds of data (audio, symbolic?) might be quite informative |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | 6FfM6SG2MKt8r | comment | 1,367,028,540,000 | TTDqPocbXWPbU | [
"everyone"
] | [
"Jason Rolfe"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you very much for your constructive comments.
There are indeed similarities between discriminative recurrent auto-encoders and the semi-supervised recursive autoencoders of Socher, Pennington, Huang, Ng, & Manning (2011a); we will add the appropriate citation to the paper. However, the networks of Socher et al. (2011a) are very similar to RAAMs (Pollack, 1990), but with a dynamic, greedy recombination structure and a discriminative loss function. As a result, they differ from DrSAE as outlined in our response to Jurgen Schmidhuber. Like the work of Socher et al. (2011a), DrSAE is based on an recursive autoencoder that receives input on each iteration, with the top layer subject to a discriminative loss. However, Socher et al. (2011a), like Pollack (1990), iteratively adds new information on each iteration, and then reconstructs both the new information and the previous hidden state from the resulting hidden state (Socher, Huang, Pennington, Ng, & Manning, 2011 reconstructs the entire history of inputs). The discriminative loss function is also applied at every iteration. In contrast, the input to DrSAE is the same on each iteration, and only the reconstruction and classification based upon the final state is optimized. The entire recursive LISTA stack constitutes a single encoder, which is decoded in a single (linear) step. Whereas Socher et al. (2011a) performs discriminative compression of a variable-length, structured input using a zero-hidden-layer encoder, our goal is static autoencoding using a deep (recursive) encoder.
Moreover, the main contribution of our paper is the demonstration of a novel and interesting hidden representation (based upon prototypes and their deformations along the data manifold), along with a network that naturally learns this representation. The hierarchical refinement of categorical-units from part-units that we observe seems unlikely to evolve in the networks of Socher et al. (2011a), since the activity of the part-units cannot be maintained across iterations by continuous input. The KL-divergence used for discriminative training in Socher et al. (2011a) is only identical to the logistic loss if the target distributions have no uncertainty (i.e., they are one-hot). Our ongoing work suggests that this difference is likely to be important for the differentiation of categorical-units and part-units. |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | zzUEFMPkQcqkJ | review | 1,362,400,920,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"anonymous reviewer a32e"
] | ICLR.cc/2013/conference | 2013 | title: review of Discriminative Recurrent Sparse Auto-Encoders
review: Authors propose an interesting idea to use deep neural networks with tied weights (recurrent architecture) for image classification. However, I am not familiar enough with the prior work to judge novelty of the idea.
On the critical note, the paper is not easy to read without good knowledge of prior work, and is pretty long. I would recommend authors to consider following to make their paper more accessible:
- the description should be shorter, simpler and self-contained
- try to avoid the ad-hoc constants everywhere
- run experiments on something larger and more difficult than MNIST - current experiments are not convincing to me; together with many hand-tuned constants, I would be afraid that this model might not work at all on more realistic tasks (or that a lot of additional manual work would be needed)
- when you claim that accuracy degrades from 1.21% to 1.49% if 2 instead of 11 time steps are used, you are comparing models with much different computational complexity: try to be more fair
- also, it would be interesting to show results for the larger model (400 neurons) with less time steps than 11
Still, I consider the main idea interesting, and I believe it would lead to interesting discussions at the conference. |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | Sih8ijosvDuO_ | comment | 1,363,817,880,000 | KVmXTReW18TyN | [
"everyone"
] | [
"Jason Tyler Rolfe, Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | reply: Q2: In response to your query, we have just completed a run with the encoder row magnitude bound set to 1/T, rather than 1.25/T. MNIST classification performance was 1.13%, rather than 1.08%. Although heuristic, the hyperparameters used in the paper were not the result of extensive hand-tuning. |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | 4V-Ozm5k8mVcn | review | 1,363,400,280,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"anonymous reviewer dd6a"
] | ICLR.cc/2013/conference | 2013 | title: review of Discriminative Recurrent Sparse Auto-Encoders
review: The paper describes the following variation of an autoencoder: An encoder (with relu nonlinearity) is iterated for 11 steps, with observations providing biases for the hiddens at each step. Afterwards, a decoder reconstructs the data from the last-step hiddens. In addition, a softmax computes class-labels from the last-step hiddens. The model is trained on labeled data using the sum of reconstruction and classification loss. To perform unsupervised pre-training the classification loss can be ignored initially.
It is argued that training the architecture causes hiddens to differentiate into two kinds of unit (or maybe a continuum): part-units, which mainly try to perform reconstruction, and categorical units, which try to perform classification. Various plots are shown to support this claim empirically.
The idea is interesting and original. The work points towards a direction that hasn't been explored much, and that seems relevant in practice and from the point of view of how classification may happen in the brain. Some anecdotal evidence is provided to support the part-categorical separation claim. The evidence seems interesting. Though I'm pondering still whether there may be other explanations for those plots. Training does seem to rely somewhat on finely tuned parameter settings like individual learning rates and weight bounds.
It would be nice to provide some theoretical arguments for why one should expect the separation to happen. A more systematic study would be nice, too, eg. measuring how many recurrent iterations are actually required for the separation to happen. To what degree does that separation happen with only pre-training vs. with the classification loss? And in the presence of classification loss, could it happen with shallow model, too? The writing and organization of the paper seems preliminary and could be improved. For example, it is annoying to jump back-and-forth to refer to plots, and some plots could be made more informative (see also comments below).
The paper seems to suggest that the model gradually transforms an input towards a class-template. I'm not sure if I agree, that this is the right view given that the input is clamped (by providing biases via E) so it is available all the time. Any comments?
It may be good to refer to 'Learning continuous attractors in recurrent networks', Seung, NIPS 1998, which also describes a recurrent autoencoder (though that model is different in that it iterates encoder+decoder not just encoder with clamped data).
Questions/comments:
- It would be much better to show the top-10 part units and the top-10 categorical units instead of figure 2, which shows a bunch of filters for which it is not specified to what degree they're which (except for pointing out in the text that 3 of them seem to be more like categorical units).
- What happens if the magnitude of the rows of E is bounded simply by 1/T instead of 1.25/(T-1) ? (page 3 sentence above Eq. 4) Are learning and classification results sensitive to that value?
- Last paragraph of section 1: 'through which the prototypes of categorical-units can be reshaped into the current input': Don't you mean the other way around?
- Figure 4 seems to suggest that categorical units can have winner-takes-all dynamics that disfavor other categorical units from the same class. Doesn't that seem strange?
- Section 3.2 (middle) mentions why S-I is plotted but S-I is shown and referred to before (section 3.1) and the explanation should instead go there.
- What about the 2-step model result with 400 hiddens (end of section 4)? |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | SKcvK2UDvgKxL | review | 1,362,177,060,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"anonymous reviewer 8ddb"
] | ICLR.cc/2013/conference | 2013 | title: review of Discriminative Recurrent Sparse Auto-Encoders
review: Summary and general overview:
----------------------------------------------
The paper introduces Discriminative Recurrent Sparse Auto-Encoders, a new model, but more importantly a careful analysis of the behaviour of this model. It suggests that the hidden layers of the model learn to differentiate into a hierarchical structure, with part units at the bottom and categorical units on top.
Questions and Suggestions
----------------------------------------
1. Given equation (2) it seems that model is very similar to recurrent neural network with rectifier units as the one used for e.g. in [1]. The main difference would be how the model is being trained (the pre-training stage as well as the additional costs and weight norm constraints). I think this observation could be very useful, and would provide a different way of understanding the proposed model. From this perspective, the differentiation would be that part units have weak recurrent connections and are determined mostly by the input (i.e. behave as mlp units would), while categorical units have strong recurrent connections. I'm not sure if this parallel would work or would be helpful, but I'm wondering if the authors explored this possibility or have any intuitions about it.
2. When mentioning that the model is similar to a deep model with tied weights, one should of course make it clear that additionally to tied weights, you feed the input (same input) at each layer. At least this is what equation (2) suggests. Is it the case? Or is the input fed only at the first step ?
2. As Yoshua Bengio pointed out in his comment, I think Recurrent Networks, and hence DrSAE, suffer more from the vanishing gradient problem than deep forward models (contrary to the suggestion in the introduction). The reason is the lack of degrees of freedom RNNs have due to the tied weights used at each time step. If W for an RNN is moved such that its largest eigenvalue becomes small enough, the gradients have to vanish. For a feed forward network, all the W_i of different layers need to change such to have this property which seems a less likely event. IMHO, the reason for why DrSAE seem not to suffer too much from the vanishing gradient is due to (a) the norm constraint, and (b) the reconstruction cost found at each step which provide additional error signal. One could also say that 11 steps might not be a high enough number for vanishing gradient to make learning prohibitive.
3. Could the authors be more specific when they talk about bounding the column-wise norm of a matrices. Is this done through a soft constraint added to the cost? Is it done, for e.g., by scaling D if the norm exceeds the chosen bound ? Is there a projection done at each SGD step ? It is not clear from the text how this works.
4. The authors might have expected this from reviewers, but anyway. Could the authors validate this model (in a revision of the paper) on different datasets, beside MNIST? It would be useful to know that you see the same split of hidden units for more complex datasets (say CIFAR-10)
5. The model had been run with only 2 temporal steps. Do you still get some kind of split between categorical and part hidden units ? Did you attempt to see how the number of temporal steps affect this division of units ?
References:
[1] Yoshua Bengio, Nicolas Boulanger-Lewandowski, Razvan Pascanu, Advances in Optimizing Recurrent Networks, arXiv:1212.0901 |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | -uMO-UhKgU-Z_ | review | 1,368,275,760,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Richard Socher"
] | ICLR.cc/2013/conference | 2013 | review: Hi Jason and Yann,
Thanks for the insightful reply.
Best,
Richard |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | UEx3pAOcLlpPT | review | 1,363,223,340,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Jason Rolfe"
] | ICLR.cc/2013/conference | 2013 | review: * Jurgen Schidhuber:
Thank you very much for your constructive comments.
1. Like the work of Pollack (1990), DrSAE is based on an recursive autoencoder that receives input on each iteration. However, (sequential) RAAMs iteratively add new information on each iteration, and then iteratively reconstruct the entire history of inputs from the resulting hidden state. In contrast, the input to DrSAE is the same on each iteration, and only the reconstruction based upon the final state is optimized. The entire recursive LISTA stack constitutes a single encoder, which is decoded in a single (linear) step. Whereas RAAMs perform unsupervised history compression, our goal is static autoencoding. Moreover, DrSAEs perform classification in addition to autoencoding; the logistic loss component is essential to the differentiation into categorical-units and part-units (RAAMs have no discriminative component). Finally, DrSAE's encoder is non-negative LISTA (a multi-layer network of rectified linear units, with tied parameters between the layers, and a projection from the input to all layers), its decoder is linear, and it makes use of a loss function including L1 regularization and logistic classification loss (RAAMs use a single-hidden-layer sigmoidal neural network without sparsification). RAAMs and DrSAEs are both recurrent and receive some sort of input on each iteration, but they have different architectures and solve different problems; they resemble each other only in the coarsest possible manner.
2. Please see point 2(b) in response to reviewer Anonymous 8ddb; the references to the vanishing gradient problem were tangential, and have been removed.
3. As you point out, it is well-known that data set augmentations (such as translations and elastic deformation of the input) and explicit regularization of the parameters to force the corresponding invariances (such as a convolutional network structure) improve the performance of machine learning algorithms of this type. It is similarly possible to improve performance by training many instances of the same network (perhaps on different subsets of the data) and aggregating their outputs. It is standard practice to separately report performance with and without making use of these techniques. Deformations can obviously be added in later to yield improved performance. We have added a note regarding the possibility of these augmentations, along with the appropriate citations. |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | KVmXTReW18TyN | comment | 1,363,664,400,000 | 4V-Ozm5k8mVcn | [
"everyone"
] | [
"Jason Tyler Rolfe, Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | reply: *Anonymous dd6a
Thank you very much for your helpful comments.
P2: Both the categorical-units and the part-units participate in reconstruction. Since the categorical-units become more active than the part-units (as per figure 7), they actually make a larger contribution to the reconstruction (evident in figure 9(b,c), where even the first step of the progressive reconstruction is strong).
P4: The differentiation into part-units and categorical-units does occur even with only two ISTA iterations (one pass through the explaining-away matrix), the shallowest architecture in which categorical-units can aggregate over part-units, as noted at the end of section 4. Without the classification loss, the network is an instance of (non-negative) LISTA, and categorical-units do not develop at all. Thus, only one recurrent iteration is required for categorical-units to emerge, and the classification loss is essential for categorical-units to emerge. We have added plots to figure 3 demonstrating these phenomena.
With regards to the theoretical cause of the differentiation into categorical-units and part-units, please see part 1 of our response to Yoshua Bengio.
The three plots at the end were intended to serve as supplementary materials. However, as you point out, these figures are important for the analysis presented in the text, so they have been moved into the main text.
P5: The network decomposes the input into a prototype and a sparse set of perturbations; we refer to these perturbations, encoded in the part-units, as the signal that 'transforms' the prototype into the input. That is, categorical + part ~ input. The input itself is not (and need not be) modified in the process of constructing this decomposition. The clamping of the input does not affect this interpretation.
P6: Thank you for the reference; we have included it in the paper. Of course, since Seung (1998) does not include a discriminative loss function, there is no reason to believe that categorical-units differentiated from part-units in his model.
Q1: We have made the suggested change to figure 2. Filters sorted by categoricalness are also shown in figures 5, 6, 7, and 10.
Q2: We have not yet undertaken a rigorous or extensive search of hyperparameter space. We expect the results with with the rows of E bounded by 1/T will be similar to those with a bound of 1.25/T. The (T-1) in the denominator of this bound in the paper was a typo, which we have corrected.
Q3: The assertion that 'the prototypes of categorical-units are reshaped into the current input' is mathematically equivalent to 'the current input is reshaped into the prototypes of categorical-units.' In one case, categorical + part = input; in the other, input - part = categorical. Both interpretations are actively enforced by the reconstruction component of the loss function L^U in equation 1. Since the inputs are clamped, we find it most intuitive to think of the reconstruction due to the prototypes of the categorical-units being reshaped by the part-units to match the fixed input.
Q4: When a chosen categorical-unit suppresses other categorical-units of the same class, it corresponds to the selection of a single prototype, which is both natural and desirable. It is easy to imagine that there may be classes with multiple prototypes, for which arbitrary linear combinations of the prototypes are not members of the class. For example, the sum of a left-leaning 1 and a right-leaning 1 is an X, rather than a 1.
Q5: Indeed, the ISTA-mediated relationship between S-I and D^t*D is first discussed in the second paragraph of section 3. This is the clearest explanation for the use of S-I. We have removed other potentially-confusing, secondary justifications, and further clarified the intuitive basis of this primary justification.
Q6: We have added the requested result on the 2-step model with 400 hiddens at the end of section 4. The trend is the same with 400 units as with 200 units. If the number of recurrent iterations is decreased from eleven to two, MNIST classification error in a network with 400 hidden units increases from 1.08% to 1.32%. With only 200 hidden units, MNIST classification error increases from 1.21% to 1.49%, although the hidden units still differentiate into part-units and categorical-units. |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | 5Br_BDba_D57X | comment | 1,363,395,000,000 | uc38pbD6RhB1Z | [
"everyone"
] | [
"Jason Tyler Rolfe, Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | reply: * Anonymous bc93:
We offer our sincere thanks for your thoughtful comments.
Q1: The dynamics are indeed smooth, as shown in figure 5. However, there is no reason to believe that the dynamics will stabilize beyond the trained interval. In fact, simulations past the trained interval show that the most active categorical unit often seems to grow continuously.
Q2: The image reconstruction is small for the first iteration or two, but thereafter is stable throughout the trained interval and beyond. Classification is more sensitive to the exact balance between part-units and categorical-units, and is less reliable as one moves away from the trained iteration T.
R1: Any multilayer network (say with L layers of M units) can be seen as a recurrent network with M*L units, unrolled for L time steps, which is sparsely connected (e.g. with a block upper triangular matrix). Admittedly, this would be a computationally inefficient way to run the multilayer network. But the representational power of the two networks are identical. Hence recurrent nets are not intrinsically less powerful than multilayer ones, if one is willing to make them large. DrSAE leaves it up to the learning algorithm to decide which hidden units will act as 'lower-layer' or 'upper-layer' units.
R2: The reference to the vanishing gradients problem was tangential and, given its contentious nature, has been removed from the paper. Nevertheless, please see our comments on the matter to the other reviewers.
R3: The loss functions are indeed only applied to the last iteration of the hidden units. We have added an explicit mention of this in the text to avoid confusion. Future work will explore the use of a reconstruction cost summed over time. This may have the effect of quickening the convergence of the inference and making the classification and reconstruction more stable past the training interval.
R4: The T=11 could more appropriately called T'=10, since there are 10 applications of the explaining-away matrix S, although T=11 represents the number of applications of the non-lineaity. Experiments were conducted for T=2, T=6, and T=11. The paper focuses mostly on T=11. We have added a note to this effect.
R5: While the existence of a dichotomy between part-units and categorical-units is certainly identifiable without recourse to ISTA, as is evident from figures 8 and 10, the understanding of the part-units is best framed in terms of ISTA, which predicts the learned parameters with considerable accuracy. Were it not for the fact that our network architecture is derived from ISTA, it would be remarkable that the part-units spontaneously learn parameters that so closely match with ISTA.
While perhaps unfamiliar to some readers, ISTA is simple and intuitive; we suspect that the difficulty you allude to is primarily an issue of nomenclature. With non-negative units, ISTA is just projected gradient descent on the loss function of equation 1 (the projection is onto the non-negativity constraint). We have added a note to this effect in paragraph 3.1, which we hope will make this analysis easier to follow for readers unfamiliar with ISTA.
R6: Please see our response to the other reviewers. |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | PZqMVyiGDoPcE | review | 1,363,734,420,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Andrew Maas"
] | ICLR.cc/2013/conference | 2013 | review: Interesting work! The use of relU units in an RNN is something I haven't seen before. I'd be interested in some discussion on how relU compares to e.g. tanh units in the recurrent setting. I imagine relU units may suffer less from vanishing/saturation during RNN training.
We have a related model (deep discriminative recurrent auto-encoders) for speech signal denoising, where the task is exactly denoising the input features instead of classification. It would be nice to better understand how the techniques you present can be applied in this type of regression setting as opposed to classification.
Andrew L. Maas, Quoc V. Le, Tyler M. O'Neil, Oriol Vinyals, Patrick Nguyen, and Andrew Y. Ng. (2012). Recurrent Neural Networks for Noise Reduction in Robust ASR. Interspeech 2012.
http://ai.stanford.edu/~amaas/papers/drnn_intrspch2012_final.pdf |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The network is first trained in unsupervised mode to reconstruct the input, and subsequently trained discriminatively to also produce the desired output. The recurrent network is time-unfolded with a given number of iterations, and trained using back-propagation through time. In its time-unfolded form, the network can be seen as a very deep multi-layer network in which the weights are shared between the hidden layers. The depth allows the system to exhibit all the power of deep network while substantially reducing the number of trainable parameters.
From an initially unstructured recurrent network, the hidden units of discriminative recurrent sparse auto-encoders naturally organize into a hierarchy of features. The systems spontaneously learns categorical-units, whose activity build up over time through interactions with part-units, which represent deformations of templates. Even using a small number of hidden units per layer, discriminative recurrent sparse auto-encoders that are pixel-permutation agnostic achieve excellent performance on MNIST. | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | yy9FyB6XUYyiJ | review | 1,362,604,500,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Jürgen Schmidhuber"
] | ICLR.cc/2013/conference | 2013 | review: Interesting implementation and results.
But how is this approach related to the original, unmentioned work on Recurrent Auto-Encoders (RAAMs) by Pollack (1990) and colleagues? What's the main difference, if any? Similar for previous applications of RAAMs to unsupervised history compression, e.g., (Gisslen et al, AGI 2011).
The vanishing gradient problem was identified and precisely analyzed in 1991 by Hochreiter's thesis http://www.bioinf.jku.at/publications/older/3804.pdf . The present paper, however, instead refers to other authors who published three years later.
Authors write: 'MNIST classification error rate (%) for pixel-permutation-agnostic encoders' (best result: 1.08%). What exactly does that mean? Does it mean that one may not shift the input through eye movements, like in the real world? I think one should mention and discuss that without such somewhat artificial restrictions the best MNIST test error is at least 4 times smaller: 0.23% (Ciresan et al, CVPR 2012). |
0W7-W0EaA4Wak | Joint Training Deep Boltzmann Machines for Classification | [
"Ian Goodfellow",
"Aaron Courville",
"Yoshua Bengio"
] | We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpainting-based objective function that facilitates second order optimization and line searches. | [
"classification",
"new",
"deep boltzmann machines",
"prior methods",
"initial learning pass",
"deep boltzmann machine",
"layer",
"time"
] | https://openreview.net/pdf?id=0W7-W0EaA4Wak | https://openreview.net/forum?id=0W7-W0EaA4Wak | ua4iaAgtT2WVU | review | 1,362,265,800,000 | 0W7-W0EaA4Wak | [
"everyone"
] | [
"anonymous reviewer b31c"
] | ICLR.cc/2013/conference | 2013 | title: review of Joint Training Deep Boltzmann Machines for Classification
review: This breaking-news paper proposes a new method to jointly train the layers of a DBM. DBM are usually 'pre-trained' in a layer-wise manner using RBMs, a conceivably suboptimal procedure. Here the authors propose to use a deterministic criterion that basically turns the DBM into a RNN. This RNN is trained with a loss that resembles that one of denoising auto-encoders (some inputs at random are missing and the task is to predict their values from the observed ones).
The view of a DBM as special kind of RNN is not new and the inpainting criterion is not new either, however their combination is. I am very curious to see whether this will work because it may introduce a new way to train RNNs that can possibly work well for image related tasks. I am not too excited about seeing this as a way to improve DBMs as a probabilistic model, but that's just my personal opinion.
Overall this work can be moderately original and of good quality.
Pros
-- clear motivation
-- interesting model
-- good potential to improve DBM/RNN training
-- honest writing about method and its limitation (I really like this and it is so much unlike most of the work presented in the literature). Admitting current limitations of the work and being explicit about what is implemented helps the field making faster progress and becoming less obscure to outsiders.
Cons
-- at this stage this work seems preliminary
-- formulation is unclear
More detailed comments:
The notation is a bit confusing: what's the difference between Q^*_i and Q^*? Is the KL divergence correct? I would expect something like:
KL(DBM probability of (v_{S_i} | v_{-S_i}) || empirical probability of ( v_{S_i} | v_{-S_i}) ). I do not understand why P(h | v_{-S_i}) shows up there.
It would be nice to relate this method to denoising autoencoders. In my understanding this is the analogous for RNN-kind of networks.
Doesn't CG make the training procedure more prone to overfitting on the minibatch? How many steps are executed?
Important details are missing. Saying that error rate on MNIST is X% does not mean much if the size of the network is not given.
Overall, this is a good breaking news paper. |
0W7-W0EaA4Wak | Joint Training Deep Boltzmann Machines for Classification | [
"Ian Goodfellow",
"Aaron Courville",
"Yoshua Bengio"
] | We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpainting-based objective function that facilitates second order optimization and line searches. | [
"classification",
"new",
"deep boltzmann machines",
"prior methods",
"initial learning pass",
"deep boltzmann machine",
"layer",
"time"
] | https://openreview.net/pdf?id=0W7-W0EaA4Wak | https://openreview.net/forum?id=0W7-W0EaA4Wak | g6eHAgMz5csdN | review | 1,363,214,940,000 | 0W7-W0EaA4Wak | [
"everyone"
] | [
"Ian J. Goodfellow, Aaron Courville, Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: We have updated our paper and are waiting for arXiv to make the update public. We'll add the updated paper to this webpage as soon as arXiv makes the public link available.
To anonymous reviewer 55e7:
-We'd like to draw to your attention that this paper was submitted to the workshops track. We agree with you that the results are very preliminary, which is why we did not submit it to the conference track. We know that the web interface for reviewers doesn't make it clear which track a paper was submitted to.
-We don't find the connection to NADE to be particularly meaningful, for the following reasons:
1) You can think of *any* model trained with maximum likelihood as learning to predict subsets of the inputs from each other. This is just a consequence of the chain rule of probability, p(x,y,z) = p(x)p(y|x)P(z|y,x).
2) For NADE, each variable appears only in one term of the cost function, and is always predicted given the same subset of other variables as input. In our algorithm each variable appears in an exponential number of terms, each with a different input set.
3) NADE defines the model such that P(v_i | v_1, ..., v_{i-1}) is just specified to be what you'd get by running one step of mean field in an RBM. NADE thus uses exact inference in the model that it is training. We use approximate inference, and we also run the mean field to convergence, rather than just doing one step.
4) A trained JDBM can easily predict any subset of variables given any other subset of variables, but NADE runs into problems with intractable inference for most queries. NADE is based on designing a model so that exact inference can compute P(v) easily, but this does not translate into estimating one half of v given the other half, because so many states need to be summed out. ie, to estimate P(v_n | v_1, ... v_k) NADE must explicitly sum over all joint assignments to v_k, ..., v_n-1. This is the case even for queries that follow the same structure as the NADE model.
5) NADE is based on exact maximum likelihood learning. Our algorithm is based on an approximation to pseudolikelihood learning.
To anonymous reviewer b31c:
-Yes, I wrote the wrong expression for the KL divergence. It's fixed now.
-Regarding denoising autoencoders, yes, it would be interesting to connect them. Some denoising autoencoders can be understood as doing score matching on RBMs. It's not clear how to extend that view of denoising autoencoders to the setting we explore in this paper (discrete rather than continuous variables, multiple hidden layers rather than one hidden layer).
-CG can overfit the minibatch, but you can compensate for this by using big minibatches. The original DBM paper already uses CG for the supervised fine tuning. Our best results were with 5 CG steps per minibatch of 1250 examples. We have updated the workshop paper to specify these details.
-The size of the network is the same as in the original DBM paper.
We have updated the paper to specify this. |
0W7-W0EaA4Wak | Joint Training Deep Boltzmann Machines for Classification | [
"Ian Goodfellow",
"Aaron Courville",
"Yoshua Bengio"
] | We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpainting-based objective function that facilitates second order optimization and line searches. | [
"classification",
"new",
"deep boltzmann machines",
"prior methods",
"initial learning pass",
"deep boltzmann machine",
"layer",
"time"
] | https://openreview.net/pdf?id=0W7-W0EaA4Wak | https://openreview.net/forum?id=0W7-W0EaA4Wak | nnKMnn0dlyqCD | review | 1,362,172,860,000 | 0W7-W0EaA4Wak | [
"everyone"
] | [
"anonymous reviewer 55e7"
] | ICLR.cc/2013/conference | 2013 | title: review of Joint Training Deep Boltzmann Machines for Classification
review: The authors aim to introduce a new method for training deep Boltzmann machines. Inspired by inference procedure they turn the model into two hidden layers autoencoder with recurrent connections. Instead of reconstructing all pixels from all (perhaps corrupted) pixels they reconstruct one subset of pixels from the other (the complement).
Overall this paper is too preliminary and there are too few experiments and most pieces are not new. However with better analysis and experimentation this might turn out to be very good architecture, but at this point is hard to tell.
The impainting objective is similar to denoising - one tries to recover original information from either subset of pixels or from corrupted image. So this is quite similar to denoising autoencoders. It is actually exactly the same as the NADE algorithm, which can be equivalently trained by the same criterion (reconstructing one set of pixels from the other - quite obvious) instead of going sequentially through pixels. The architecture is an autoencoder but a more complicated one then standard single layer - it has two (or more) hidden layers and is recurrent. In addition there is the label prediction cost. The idea of a more complicated encoding function, including recurrence, is interesting but certainly not new and neither is combining unsupervised and supervised criterion in one criterion. However if future exploration shows that this particular architecture is a good way of learning features, or that is specifically trains well the deep bolzmann machines, or it is good for some other problems then this work can be very interesting. However as presented, it needs more experiments. |
0W7-W0EaA4Wak | Joint Training Deep Boltzmann Machines for Classification | [
"Ian Goodfellow",
"Aaron Courville",
"Yoshua Bengio"
] | We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpainting-based objective function that facilitates second order optimization and line searches. | [
"classification",
"new",
"deep boltzmann machines",
"prior methods",
"initial learning pass",
"deep boltzmann machine",
"layer",
"time"
] | https://openreview.net/pdf?id=0W7-W0EaA4Wak | https://openreview.net/forum?id=0W7-W0EaA4Wak | i4E0iizbl6uCv | review | 1,367,449,740,000 | 0W7-W0EaA4Wak | [
"everyone"
] | [
"Ian J. Goodfellow, Aaron Courville, Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: We have posted an update to the arXiv paper, containing new material that we will present at the workshop. |
0W7-W0EaA4Wak | Joint Training Deep Boltzmann Machines for Classification | [
"Ian Goodfellow",
"Aaron Courville",
"Yoshua Bengio"
] | We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpainting-based objective function that facilitates second order optimization and line searches. | [
"classification",
"new",
"deep boltzmann machines",
"prior methods",
"initial learning pass",
"deep boltzmann machine",
"layer",
"time"
] | https://openreview.net/pdf?id=0W7-W0EaA4Wak | https://openreview.net/forum?id=0W7-W0EaA4Wak | _B-UB_2zNqJCO | review | 1,363,360,620,000 | 0W7-W0EaA4Wak | [
"everyone"
] | [
"anonymous reviewer 55e7"
] | ICLR.cc/2013/conference | 2013 | review: Indeed I didn't notice this was a workshop paper, which then doesn't have to be as complete.
Standard way to train nade is go in the fixed order. However you can also choose a random for each input (it leads to worse likelihood though). This is then equivalent to blanking random m pixels and predicting remaining n-m where n is the input size and m is chosen randomly from 0..n-1 with appropriate weighting. |
0W7-W0EaA4Wak | Joint Training Deep Boltzmann Machines for Classification | [
"Ian Goodfellow",
"Aaron Courville",
"Yoshua Bengio"
] | We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpainting-based objective function that facilitates second order optimization and line searches. | [
"classification",
"new",
"deep boltzmann machines",
"prior methods",
"initial learning pass",
"deep boltzmann machine",
"layer",
"time"
] | https://openreview.net/pdf?id=0W7-W0EaA4Wak | https://openreview.net/forum?id=0W7-W0EaA4Wak | uu7m3uY-jKu9P | review | 1,363,234,680,000 | 0W7-W0EaA4Wak | [
"everyone"
] | [
"Ian J. Goodfellow, Aaron Courville, Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: The arXiv link now contains the second revision. |
7hPJygSqJehqH | Latent Relation Representations for Universal Schemas | [
"Sebastian Riedel",
"Limin Yao",
"Andrew McCallum"
] | Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a universal schema: the union of all involved schemas (surface form predicates as in OpenIE, and relations in the schemas of pre-existing databases). This schema has an almost unlimited set of relations (due to surface forms), and supports integration with existing structured data (through the relation types of existing databases). To populate a database of such schema we present a family of matrix factorization models that predict affinity between database tuples and relations. We show that this achieves substantially higher accuracy than the traditional classification approach. More importantly, by operating simultaneously on relations observed in text and in pre-existing structured DBs such as Freebase, we are able to reason about unstructured and structured data in mutually-supporting ways. By doing so our approach outperforms state-of-the-art distant supervision systems. | [
"relations",
"schema",
"schemas",
"databases",
"latent relation representations",
"fixed",
"finite target schema",
"machine"
] | https://openreview.net/pdf?id=7hPJygSqJehqH | https://openreview.net/forum?id=7hPJygSqJehqH | VVGqfOMv0jV23 | review | 1,362,170,580,000 | 7hPJygSqJehqH | [
"everyone"
] | [
"anonymous reviewer 129c"
] | ICLR.cc/2013/conference | 2013 | title: review of Latent Relation Representations for Universal Schemas
review: The paper studies techniques for inferring a model of entities and relations capable of performing basic types of semantic inference (e.g., predicting if a specific relation holds for a given pair of entities). The models exploit different types of embeddings of entities and relations.
The topic of the paper is interesting and the contribution seems quite sufficient for a workshop paper. It should motivate an interesting discussion on how these models can be generalized to be applied to more complex datasets and semantic tasks (e.g., inferring these representation from natural language texts), and, in general, on representation induction methods for semantic tasks.
The only concern I have about this paper is that it does not seem to properly cite much of the previous work on related subjects. Though it mentions techniques for clustering semantically similar expressions, it seems to suggest that there has not been much work on inducing, e.g., subsumptions. However, there has been a lot of previous research on learning entailment (aka inference) rules (e.g., Chkolvsky and Pantel 2004; Berant et al, ACL 2011; Nakashole et al, ACL 2012). Even more importantly, some of the very related work on embedding relations is not mentioned, e.g., Bordes et al (AAAI 2011), or, very closely related, Jenatton et al (NIPS 2012). However, these omissions may be understandable given the short format of the paper.
Pros:
-- Interesting topics
-- Fairly convincing experimental results
Cons:
-- Previous work on embedding relations is not discussed. |
7hPJygSqJehqH | Latent Relation Representations for Universal Schemas | [
"Sebastian Riedel",
"Limin Yao",
"Andrew McCallum"
] | Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a universal schema: the union of all involved schemas (surface form predicates as in OpenIE, and relations in the schemas of pre-existing databases). This schema has an almost unlimited set of relations (due to surface forms), and supports integration with existing structured data (through the relation types of existing databases). To populate a database of such schema we present a family of matrix factorization models that predict affinity between database tuples and relations. We show that this achieves substantially higher accuracy than the traditional classification approach. More importantly, by operating simultaneously on relations observed in text and in pre-existing structured DBs such as Freebase, we are able to reason about unstructured and structured data in mutually-supporting ways. By doing so our approach outperforms state-of-the-art distant supervision systems. | [
"relations",
"schema",
"schemas",
"databases",
"latent relation representations",
"fixed",
"finite target schema",
"machine"
] | https://openreview.net/pdf?id=7hPJygSqJehqH | https://openreview.net/forum?id=7hPJygSqJehqH | 00Bom31A5XszS | review | 1,362,259,560,000 | 7hPJygSqJehqH | [
"everyone"
] | [
"anonymous reviewer 2d4e"
] | ICLR.cc/2013/conference | 2013 | title: review of Latent Relation Representations for Universal Schemas
review: This paper presents a framework for open information extraction. This problem is usually tackled either via distant weak supervision from a knowledge base (providing structure and relational schemas) or in a totally unsupervised fashion (without any pre-defined schemas). The present approach aims at combining both trends with the introduction of universal schemas that can blend pre-defined ones from knowledge bases and uncertain ones extracted from free text.
This paper is very ambitious and interesting. The goal of bridging knowledges bases and text for information extraction is great, and this paper seems to go in the right direction. The experiments seem to show that mixing data sources is beneficial.
The idea of asymmetric implicature among relation is appealing but its implementation in the model remains unclear. How common is it that a tuple shares many relations? One can not tell anything for relations for which corresponding tuples are disjoint from the rest.
The main weakness of the system as it is presented here is that it relies on the fact that entities constituting tuples from the knowledge base (Freebase here) and tuples extracted from the text have been exactly matched beforehand. This is a huge limitation before any real application, because this involves solving a complex named entity recognition - word sense disambiguation - coreference resolution problem.
Is there any parameter sharing between latent feature vectors of entities and tuples (=pairs of entities)? And between relation vectors and neighbors weights?
Minor: the notation for the set of observed fact disappeared.
Pros:
- great motivation and research direction
- model and experiments are sound
Cons:
- lack of details
- many unanswered questions remain to apply it on real-world data. |
7hPJygSqJehqH | Latent Relation Representations for Universal Schemas | [
"Sebastian Riedel",
"Limin Yao",
"Andrew McCallum"
] | Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a universal schema: the union of all involved schemas (surface form predicates as in OpenIE, and relations in the schemas of pre-existing databases). This schema has an almost unlimited set of relations (due to surface forms), and supports integration with existing structured data (through the relation types of existing databases). To populate a database of such schema we present a family of matrix factorization models that predict affinity between database tuples and relations. We show that this achieves substantially higher accuracy than the traditional classification approach. More importantly, by operating simultaneously on relations observed in text and in pre-existing structured DBs such as Freebase, we are able to reason about unstructured and structured data in mutually-supporting ways. By doing so our approach outperforms state-of-the-art distant supervision systems. | [
"relations",
"schema",
"schemas",
"databases",
"latent relation representations",
"fixed",
"finite target schema",
"machine"
] | https://openreview.net/pdf?id=7hPJygSqJehqH | https://openreview.net/forum?id=7hPJygSqJehqH | HN_nN48xQYLxO | review | 1,363,302,420,000 | 7hPJygSqJehqH | [
"everyone"
] | [
"Andrew McCallum"
] | ICLR.cc/2013/conference | 2013 | review: This is a test of a note to self. |
gGivgRWZsLgY0 | Clustering Learning for Robotic Vision | [
"Eugenio Culurciello",
"Jordan Bates",
"Aysegul Dundar",
"Jose Carrasco",
"Clement Farabet"
] | We present the clustering learning technique applied to multi-layer feedforward deep neural networks. We show that this unsupervised learning technique can compute network filters with only a few minutes and a much reduced set of pa- rameters. The goal of this paper is to promote the technique for general-purpose robotic vision systems. We report its use in static image datasets and object track- ing datasets. We show that networks trained with clustering learning can outper- form large networks trained for many hours on complex datasets. | [
"robotic vision",
"clustering learning technique",
"unsupervised learning technique",
"network filters",
"minutes",
"set",
"rameters",
"goal",
"technique"
] | https://openreview.net/pdf?id=gGivgRWZsLgY0 | https://openreview.net/forum?id=gGivgRWZsLgY0 | PiVQP7pKuhiR5 | review | 1,363,392,540,000 | gGivgRWZsLgY0 | [
"everyone"
] | [
"Eugenio Culurciello, Jordan Bates, Aysegul Dundar, Jose Carrasco, Clement Farabet"
] | ICLR.cc/2013/conference | 2013 | review: Dear reviewers, we have fixed all issues that you have reported in your kind review of the manuscript and uploaded a revision. |
gGivgRWZsLgY0 | Clustering Learning for Robotic Vision | [
"Eugenio Culurciello",
"Jordan Bates",
"Aysegul Dundar",
"Jose Carrasco",
"Clement Farabet"
] | We present the clustering learning technique applied to multi-layer feedforward deep neural networks. We show that this unsupervised learning technique can compute network filters with only a few minutes and a much reduced set of pa- rameters. The goal of this paper is to promote the technique for general-purpose robotic vision systems. We report its use in static image datasets and object track- ing datasets. We show that networks trained with clustering learning can outper- form large networks trained for many hours on complex datasets. | [
"robotic vision",
"clustering learning technique",
"unsupervised learning technique",
"network filters",
"minutes",
"set",
"rameters",
"goal",
"technique"
] | https://openreview.net/pdf?id=gGivgRWZsLgY0 | https://openreview.net/forum?id=gGivgRWZsLgY0 | -YucDnyrcVDfe | review | 1,364,401,500,000 | gGivgRWZsLgY0 | [
"everyone"
] | [
"Eugenio Culurciello, Jordan Bates, Aysegul Dundar, Jose Carrasco, Clement Farabet"
] | ICLR.cc/2013/conference | 2013 | review: we accept the poster presentation, thank you for organizing this! |
gGivgRWZsLgY0 | Clustering Learning for Robotic Vision | [
"Eugenio Culurciello",
"Jordan Bates",
"Aysegul Dundar",
"Jose Carrasco",
"Clement Farabet"
] | We present the clustering learning technique applied to multi-layer feedforward deep neural networks. We show that this unsupervised learning technique can compute network filters with only a few minutes and a much reduced set of pa- rameters. The goal of this paper is to promote the technique for general-purpose robotic vision systems. We report its use in static image datasets and object track- ing datasets. We show that networks trained with clustering learning can outper- form large networks trained for many hours on complex datasets. | [
"robotic vision",
"clustering learning technique",
"unsupervised learning technique",
"network filters",
"minutes",
"set",
"rameters",
"goal",
"technique"
] | https://openreview.net/pdf?id=gGivgRWZsLgY0 | https://openreview.net/forum?id=gGivgRWZsLgY0 | NL-vN6tmpZNMh | review | 1,362,195,960,000 | gGivgRWZsLgY0 | [
"everyone"
] | [
"anonymous reviewer 5eb5"
] | ICLR.cc/2013/conference | 2013 | title: review of Clustering Learning for Robotic Vision
review: The paper presents an application of clustering-based feature learning ('CL') to image recognition tasks and tracking tasks for robotics. The basic system uses a clustering algorithm to train filters from small patches and then applies them convolutionally using a sum-abs-difference (instead of inner product) operation. This is followed with a fixed combination of processing stages (pooling, nonlinearity, normalization) and passed to a supervised learning algorithm. The approach is compared with 2-layer CNNs on image recognition benchmarks (StreetView house numbers, CIFAR10) and tracking (TLD dataset); in the last case it is shown that the method outperforms a 2-layer CNN from prior work. The speed of learning and test-time evaluation are compared as a measure of suitability for realtime use in robotics.
The main novelty here appears to be on a couple of points: (1) the particular choice of architecture (which is motivated at least in part by the desire to run in programmable hardware such as FPGAs), (2) documenting the speed advantage and positive tracking results in applications, both of which are worthwhile goals. Evaluation and training speed, as the authors note, are not well-documented in deep learning work and this is a problem for real-time applications like robotics.
Some questions I had about the content:
I did not follow how the 2nd layer of clustered features were trained. It looks like these were trained on single channels of the pooled feature responses?
Was the sum-abs-diff operation also used for the CNN?
One advantage of the clustering approach is that it is easier to train larger filter banks than with fine-tuned CNNs. Can the accuracy gap in recognition tasks be reduced by using more features? And at what cost to evaluation time?
Pros:
(1) Results are presented for a novel network architecture, documenting the speed and simplicity of clustering-based feature learning methods for vision tasks. It is hard to overstate how useful rapid training is for developing applications, so further results are welcome.
(2) Some discussion is included about important optimizations for hardware, but I would have liked more detail on this topic.
Cons:
(1) The architecture is somewhat unusual and it's not clear what motivates each processing stage.
(2) Though training is much faster, it's not clear to what degree the speed of training is useful for the robotics applications given. [As opposed to online evaluation time.]
(3) The extent of results is modest, and their bearing on actual work in robotics (or broader work in CV) is unclear. The single tracking result is interesting, but versus a baseline method.
Overall, I think the 'cons' point to the robotics use-case not being thoroughly justified; but there are some ideas in here that would be interesting on their own with more depth. |