forum_id string (8-20) | forum_title string (1-899) | forum_authors sequence (0-174) | forum_abstract string (0-4.69k) | forum_keywords sequence (0-35) | forum_pdf_url string (38-50) | forum_url string (40-52) | note_id string (8-20) | note_type 6 classes | note_created int64 (1,360B-1,737B) | note_replyto string (4-20) | note_readers sequence (1-8) | note_signatures sequence (1-2) | venue 349 classes | year 12 classes | note_text string (10-56.5k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0OR_OycNMzOF9 | Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences | [
"Sainbayar Sukhbaatar",
"Takaki Makino",
"Kazuyuki Aihara"
] | Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from image sequences. It is trained to improve the temporal coherence of features, while keeping the information loss at minimum. Our method does not use spatial information, so it can be used with non-convolutional models too. Experiments on images extracted from natural videos showed that our method can cluster similar features together. When trained by convolutional features, auto-pooling outperformed traditional spatial pooling on an image classification task, even though it does not use the spatial topology of features. | [
"invariance",
"image sequences",
"features",
"learning",
"image features",
"images",
"invariant representations",
"hardest challenges",
"computer vision",
"spatial pooling"
] | https://openreview.net/pdf?id=0OR_OycNMzOF9 | https://openreview.net/forum?id=0OR_OycNMzOF9 | Deofes8a4Heux | review | 1,362,197,040,000 | 0OR_OycNMzOF9 | [
"everyone"
] | [
"anonymous reviewer 8b0d"
] | ICLR.cc/2013/conference | 2013 | title: review of Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences
review: The paper presents a method to learn invariant features by using temporal coherence. A set of linear pooling units are trained on top of a set of (pre-trained) features using what is effectively a linear auto-encoder with a penalty for changes over time (a 'slowness' penalty). Visualizations show that the learned weights of the pooling units tend to combine features for translated or slightly rotated edges (as expected for complex cells), and benchmark results show some improvement over hand-coded pooling units.
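For concreteness, a minimal sketch of this kind of objective — my paraphrase of the setup, not the authors' exact formulation; the trade-off weight `lam` and the matrix shapes are assumed for illustration — could look like this:
```python
import numpy as np

def autopool_loss(P, X_t, X_t1, lam=1.0):
    """Linear auto-encoder over pre-trained features with a slowness penalty.

    P    : (n_pool, n_feat) linear pooling weights (P encodes, P.T decodes)
    X_t  : (n_samples, n_feat) feature vectors for frame t
    X_t1 : (n_samples, n_feat) feature vectors for the next frame t+1
    lam  : assumed trade-off between reconstruction and temporal slowness
    """
    H_t, H_t1 = X_t @ P.T, X_t1 @ P.T        # pooled (clustered) responses
    recon = np.mean((X_t - H_t @ P) ** 2)    # keep information loss low
    slow = np.mean((H_t - H_t1) ** 2)        # encourage temporal coherence
    return recon + lam * slow
```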
This is a fairly straight-forward idea that gives pleasing results nonetheless. The main attraction to the method proposed here is its simplicity and modularity: a linear auto-encoder and slowness penalty is very easy to implement and could be used in almost any pipeline. This is simultaneously my main concern about the method: it is significantly subsumed by prior work (though the very simple instance here might differ). For example, see the work of Zou et al. (NIPS 2012) which uses essentially the same training method with nonlinear pooling units, Mobahi et al. (ICML 2009), and work with 'slowness' criteria more generally. That said, considering the many algorithms that have been proposed to learn pooling regions and invariant features without video, the fact that an extremely simple instance like the one here can give reasonable results is worth emphasizing.
Pros:
(1) A very simple approach that appears to yield plausible invariant features and a modest bump over hand-built pooling in the unsupervised setting.
Cons:
(1) Only linear pooling units are considered. As a result they do not add much power beyond slight regularization of the linear SVM.
(2) Only single-layer networks are considered; results with deep layers might be very interesting.
(3) There is quite a lot of prior work with very similar ideas and implementations; hopefully these can be cited and discussed. |
0OR_OycNMzOF9 | Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences | [
"Sainbayar Sukhbaatar",
"Takaki Makino",
"Kazuyuki Aihara"
] | Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from image sequences. It is trained to improve the temporal coherence of features, while keeping the information loss at minimum. Our method does not use spatial information, so it can be used with non-convolutional models too. Experiments on images extracted from natural videos showed that our method can cluster similar features together. When trained by convolutional features, auto-pooling outperformed traditional spatial pooling on an image classification task, even though it does not use the spatial topology of features. | [
"invariance",
"image sequences",
"features",
"learning",
"image features",
"images",
"invariant representations",
"hardest challenges",
"computer vision",
"spatial pooling"
] | https://openreview.net/pdf?id=0OR_OycNMzOF9 | https://openreview.net/forum?id=0OR_OycNMzOF9 | A2auXgoqFvTyV | comment | 1,362,655,740,000 | 7N2E7oCO6yPiH | [
"everyone"
] | [
"Sainbayar Sukhbaatar"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for the detailed review. Those are good points, and we will consider them in our next revision. We also want to give some explanations.
About fixed cluster size:
- Yes. In topographic maps, clusters are not required to have the same size. We will fix this sentence in the next revision. We meant to say that those methods would require an additional mechanism to achieve adaptive cluster sizes (depending on the nature of their features).
About topographic maps:
- Topographic maps may have their advantages, but I still think they put artificial restrictions on clustering. For example, edge detectors have at least four dimensions: orientation, location (two dimensions), and length. Therefore, an ideal clustering can be achieved by placing edge detectors in a four-dimensional map and grouping nearby edge detectors. It would be difficult to map such a four-dimensional clustering onto a two-dimensional plane.
- Our pooling method, on the other hand, allows any soft clustering. In addition, the clear advantage of our method over topographic maps is its modularity. The proposed method can be used with any feature learning algorithm, while topographic maps need to alter the feature learning process.
Performance comparison to topographic maps:
- Unfortunately, we could not find any reported result by those approaches on CIFAR10 (please refer to any papers that we may have missed), which is a widely used benchmark for image classification. In the future versions, we will try to implement those algorithms and apply them to CIFAR10.
About Mobahi et al.’s method:
- We didn’t compare our method to Mobahi et al.’s methods, because their method is not completely unsupervised. They combined supervised classification learning with unsupervised video coherence learning. I think those two cannot be separated, so their method cannot learn invariant features without labeled data. Our method, on the other hand, can be trained in a completely unsupervised way. The classification with labeled data is only used to show the effectiveness of pooling.
Comparison to spatial pooling:
- We compared our method to spatial pooling because it is the most widely used pooling method. Although spatial pooling is simple, it has the advantage of utilizing spatial information. Since the most dominant variance at the lowest level comes from spatial shifts, we think that beating spatial pooling without using any spatial information is a notable result.
- It is true that our model has more parameters than spatial pooling, and it can be considered as an additional coding layer. Therefore, we may have to compare it to deeper networks. In future works, we will apply our pooling method to deep networks and compare it to other deep networks.
About the performance:
- The performance on CIFAR10 depends on three factors: feature learning, pooling and classification. With the same feature learning and classification setting as our experiments (autoencoder with 100 features + linear SVM), Adam Coates et al. reported 62% test accuracy in 'An Analysis of Single-Layer Networks in Unsupervised Feature Learning', which we improved to 69% by only changing the pooling step. Coates et al. showed that the performance can be greatly improved by increasing the number of features. However, restricted by the computation time, we used only 100 features instead of 1600, which we think was the main reason for the poor performance. In the future, we think we can greatly shorten the computation time of our method by combining auto-pooling with spatial pooling.
Novelty of our paper:
- Using similarity in neighboring frames is indeed a very old idea. However, the main contribution of our paper is the combination of the slow-change constraint with the low-information-loss constraint. With a very simple implementation, we showed that a pooling based on slowness can improve on traditional spatial pooling, without changing the feature learning process. |
0OR_OycNMzOF9 | Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences | [
"Sainbayar Sukhbaatar",
"Takaki Makino",
"Kazuyuki Aihara"
] | Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from image sequences. It is trained to improve the temporal coherence of features, while keeping the information loss at minimum. Our method does not use spatial information, so it can be used with non-convolutional models too. Experiments on images extracted from natural videos showed that our method can cluster similar features together. When trained by convolutional features, auto-pooling outperformed traditional spatial pooling on an image classification task, even though it does not use the spatial topology of features. | [
"invariance",
"image sequences",
"features",
"learning",
"image features",
"images",
"invariant representations",
"hardest challenges",
"computer vision",
"spatial pooling"
] | https://openreview.net/pdf?id=0OR_OycNMzOF9 | https://openreview.net/forum?id=0OR_OycNMzOF9 | 2U4l21HEl7SVL | review | 1,361,924,340,000 | 0OR_OycNMzOF9 | [
"everyone"
] | [
"Ian Goodfellow"
] | ICLR.cc/2013/conference | 2013 | review: First off, let me say thank you for citing my + my co-authors' paper on measuring invariances.
I have a few thoughts about invariance and temporal coherence that I hope you might find helpful.
Regarding invariance, I think that invariance is not such a great property on its own. What you really want is to disentangle the different factors of variation in the dataset. Invariance plays a role in this process, because if you want one feature to correspond only to one factor of variation, it must be invariant to all of the others. But it's really a very small part of the picture. For the purposes of our paper on measuring invariances, invariance was a good enough proxy for disentangling that we could use it to test the hypothesis that deep learning systems become more invariant with depth. But I don't think invariance is a good enough property to serve as the main part of your objective function.
Regarding temporal coherence, I think a common mistake people make is to jump from the idea that features should be 'coherent' to the idea that features should be 'slow.' I think that useful features are spread over a wide spectrum of timescales. It's true that the fastest varying features are probably just noise. But the slowest varying features are probably not especially useful either. For example, if you put a camera on a streetcorner, the amount of sunlight in the scene would usually change slower than the identities of the people in the scene. I think probably the way to make progress with applications of temporal coherence is to study new ways of encouraging features to be coherent rather than just slow.
Some general suggestions on how to improve your results: You should read Adam Coates' ICML 2011 paper, which is about finding the best training algorithm and feature encoding method for single-layer architectures. I think if you use larger dictionaries (1600 instead of 100), train using OMP-1 or sparse coding instead of sparse autoencoders, and extract using T-encoding you will do much better and have a shot at beating state of the art, or at least beating Jia and Huang. Adam Coates' work shows that sparse autoencoders don't make very good feature extractors, and also that small dictionaries don't perform very well, so you're really hurting your numbers by using that setup as your feature extractor.
Finally, I think you're missing a few references. In particular, your approach is very closely related to Slow Feature Analysis, so you should cite Laurenz Wiskott and comment on the similarities. |
0OR_OycNMzOF9 | Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences | [
"Sainbayar Sukhbaatar",
"Takaki Makino",
"Kazuyuki Aihara"
] | Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from image sequences. It is trained to improve the temporal coherence of features, while keeping the information loss at minimum. Our method does not use spatial information, so it can be used with non-convolutional models too. Experiments on images extracted from natural videos showed that our method can cluster similar features together. When trained by convolutional features, auto-pooling outperformed traditional spatial pooling on an image classification task, even though it does not use the spatial topology of features. | [
"invariance",
"image sequences",
"features",
"learning",
"image features",
"images",
"invariant representations",
"hardest challenges",
"computer vision",
"spatial pooling"
] | https://openreview.net/pdf?id=0OR_OycNMzOF9 | https://openreview.net/forum?id=0OR_OycNMzOF9 | IFLJkDHcu-Ice | comment | 1,362,650,580,000 | lvwFsD4fResyH | [
"everyone"
] | [
"Sainbayar Sukhbaatar"
] | ICLR.cc/2013/conference | 2013 | reply: First of all, thank you for reviewing our paper. It was a valuable feedback. We will try to include mentioned papers in the next revision.
About the video dataset:
- We will include a detailed explanation in the next revision. In short, 40 short (2-5 minutes in length) videos are used in our experiments. We tried to collect videos containing the same objects as CIFAR10. However, images extracted from the videos were very different from CIFAR10 images. Many of them didn't include any object, and some only showed a small part of an object.
Comparison to Jia and Huang's method:
- We didn't compare our method to Jia and Huang's method, because they are fundamentally different methods. While Jia and Huang's method learns pooling regions in a supervised way, our method tries to learn pooling regions in an unsupervised way, which has many advantages.
- Although our method uses additional data, the data used for learning pooling regions was not labeled. On the other hand, Jia and Huang's method has an advantage of using labeled data, which produces pooling regions specialized for the classification task.
Comparison to state-of-art methods:
- It is true that our result on CIFAR10 is below the state of the art. However, as shown by Adam Coates (ICML, 2011), classification results are largely influenced by the configuration of feature learning, especially by the number of features. Since feature learning was not our research focus, we did little tweaking and optimization in the feature learning step. Also, restricted by the computation time, we didn't use a large number of features (100 instead of 1600), which is likely the main reason for the low test accuracies.
- Finally, let us restate the main contribution of our paper. Our pooling method is novel because it learns pooling regions in an unsupervised way. In addition, it does not use explicit spatial information and can be used with any pre-learned features. To the best of our knowledge, there is no other pooling method that satisfies those conditions. |
YBi6KFA7PfKo5 | Two SVDs produce more focal deep learning representations | [
"Hinrich Schuetze",
"Christian Scheible"
] | A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for computing representations. In this paper, we propose an alternative method that is more efficient than prior work and produces representations that have a property we call focality -- a property we hypothesize to be important for neural network representations. The method consists of a simple application of two consecutive SVDs and is inspired by Anandkumar (2012). | [
"representations",
"efficient",
"property",
"svds",
"focal deep",
"key characteristic",
"work",
"deep learning",
"neural networks"
] | https://openreview.net/pdf?id=YBi6KFA7PfKo5 | https://openreview.net/forum?id=YBi6KFA7PfKo5 | aK4z5qBF7bEod | review | 1,363,717,680,000 | YBi6KFA7PfKo5 | [
"everyone"
] | [
"Hinrich Schuetze"
] | ICLR.cc/2013/conference | 2013 | review: Thanks for your comments! The suggestions seem all good and pertinent to us and (in case the paper should be accepted and assuming there is enough space) we will incorporate them when revising the paper. In particular: relate the new method to overview in Turney&Pantel, to kernel PCA and matrix factorization approaches; expand on discussion of focality, addressing concerns about broad applicability (if it's only used as a diagnostic, then it may not be a huge concern that it's somewhat unwieldy); discussion of Turian, Socher and Maas; more details and more thorough description of 1layer vs 2layer (we thought this was pretty directly analogous to single-layer learning vs two-layer deep learning, but will expand on this in a potentially revised version).
We also totally agree that ideally larger experiments on previous data sets should be done. We were hoping that more conceptual papers (introducing new methods and metrics) would be ok without immediate large experiments. We would try to conduct some larger experiments if the paper gets accepted, but cannot promise these would be ready for the conference. |
YBi6KFA7PfKo5 | Two SVDs produce more focal deep learning representations | [
"Hinrich Schuetze",
"Christian Scheible"
] | A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for computing representations. In this paper, we propose an alternative method that is more efficient than prior work and produces representations that have a property we call focality -- a property we hypothesize to be important for neural network representations. The method consists of a simple application of two consecutive SVDs and is inspired by Anandkumar (2012). | [
"representations",
"efficient",
"property",
"svds",
"focal deep",
"key characteristic",
"work",
"deep learning",
"neural networks"
] | https://openreview.net/pdf?id=YBi6KFA7PfKo5 | https://openreview.net/forum?id=YBi6KFA7PfKo5 | VFwT2CLWfA2kU | review | 1,361,986,620,000 | YBi6KFA7PfKo5 | [
"everyone"
] | [
"anonymous reviewer 2448"
] | ICLR.cc/2013/conference | 2013 | title: review of Two SVDs produce more focal deep learning representations
review: This paper proposes to use two consecutive SVDs to produce a continuous representation. The paper also introduces a property called focality. The authors claim that this property may be important for neural networks: many classifiers cannot efficiently handle conjunctions of several features unless they are explicitly given as additional features; therefore, a more focal representation of the inputs can be a promising way to tackle this issue. This paper opens a very important discussion thread and provides some interesting starting points.
There are two contributions in this paper. First, the authors define and motivate the property of focality for the representation of the input. While the motivation is clear, its implementation is not obvious. For instance, the description provided in the subsection 'Discriminative task' is hard to understand: what is really measured and how it is related to the focality property. This part of the paper could be rephrased to be more explicit. The second contribution is the representation derived by two consecutive SVDs. I would suggest providing a bit more discussion of related work such as LSA, PCA, or the denoising auto-encoder.
In the third paragraph of the section 'Discussion', the authors may cite the work of (Collobert and Weston) and (Socher), for instance. |
YBi6KFA7PfKo5 | Two SVDs produce more focal deep learning representations | [
"Hinrich Schuetze",
"Christian Scheible"
] | A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for computing representations. In this paper, we propose an alternative method that is more efficient than prior work and produces representations that have a property we call focality -- a property we hypothesize to be important for neural network representations. The method consists of a simple application of two consecutive SVDs and is inspired by Anandkumar (2012). | [
"representations",
"efficient",
"property",
"svds",
"focal deep",
"key characteristic",
"work",
"deep learning",
"neural networks"
] | https://openreview.net/pdf?id=YBi6KFA7PfKo5 | https://openreview.net/forum?id=YBi6KFA7PfKo5 | vNpsUSMf3tNfx | comment | 1,363,717,200,000 | VFwT2CLWfA2kU | [
"everyone"
] | [
"Hinrich Schuetze"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks for your comments! If the paper is accepted, we will expand the description of the discrimination task and explain in more detail how it is related to focality (the idea is that a single hidden unit does well on the discrimination -- which is what focality is supposed to capture).
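For intuition, a brute-force sketch of this kind of single-unit discrimination check — illustrative only, not the exact metric used in the paper; the thresholding scheme is an assumption — could be:
```python
import numpy as np

def best_single_unit_accuracy(Z, y):
    """How well does the best single representation dimension, with a
    simple threshold, separate two classes?

    Z : (n_examples, n_dims) representation vectors
    y : (n_examples,) binary labels in {0, 1}
    """
    best = 0.0
    for d in range(Z.shape[1]):
        for thr in np.unique(Z[:, d]):        # try every observed threshold
            pred = (Z[:, d] > thr).astype(int)
            acc = (pred == y).mean()
            best = max(best, acc, 1.0 - acc)  # allow either orientation
    return best
```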
We will also expand the discussion of related methods (LSA, PCA, denoising auto-encoder) if the paper is accepted (assuming there is space -- which should be the case).
We will cite&discuss (Collobert and Weston) and (Socher). Pointers to other relevant literature would be appreciated. |
YBi6KFA7PfKo5 | Two SVDs produce more focal deep learning representations | [
"Hinrich Schuetze",
"Christian Scheible"
] | A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for computing representations. In this paper, we propose an alternative method that is more efficient than prior work and produces representations that have a property we call focality -- a property we hypothesize to be important for neural network representations. The method consists of a simple application of two consecutive SVDs and is inspired by Anandkumar (2012). | [
"representations",
"efficient",
"property",
"svds",
"focal deep",
"key characteristic",
"work",
"deep learning",
"neural networks"
] | https://openreview.net/pdf?id=YBi6KFA7PfKo5 | https://openreview.net/forum?id=YBi6KFA7PfKo5 | 3wTuUWS9F_w4i | review | 1,362,188,640,000 | YBi6KFA7PfKo5 | [
"everyone"
] | [
"anonymous reviewer 4c9d"
] | ICLR.cc/2013/conference | 2013 | title: review of Two SVDs produce more focal deep learning representations
review: This paper introduces a novel method to induce word vector representations from a corpus of unlabeled text. The method relies upon 'stacking' singular value decomposition with an intermediate normalization nonlinearity. The authors propose 'focality' as a metric for quantifying the quality of a learned representation. Finally, control experiments on a small collection of sentence text demonstrate that stacked SVD produces more focal representations than a single SVD.
The method of stacked SVD is novel as far as I know, but could perhaps be generalized to use other nonlinearities between the two SVD layers than length normalization alone. As the authors acknowledge, SVD is a linear transform so the intermediate nonlinearity is important as to have the entire method not reduce to a single linear transform. There are a huge number of ways to use matrix factorization to induce word vectors, Turney & Pantel (JAIR 2010) give a nice review. I would like to better understand the proposed method in the context of the many alternatives to SVD factorization (e.g. kernel PCA etc.).
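For concreteness, my reading of the method is roughly the following sketch — a paraphrase rather than the authors' exact procedure, with the dimensionalities k1 and k2 assumed for illustration:
```python
import numpy as np

def two_svd_embedding(X, k1=100, k2=100):
    """Two truncated SVDs with an intermediate length normalization.

    X : (n_items, n_contexts) co-occurrence or count matrix
    """
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    Z = U[:, :k1] * S[:k1]                                      # first SVD layer
    Z = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)  # the nonlinearity
    U2, S2, _ = np.linalg.svd(Z, full_matrices=False)
    return U2[:, :k2] * S2[:k2]                                 # second SVD layer
```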
The introduced notion of focality might serve as a good metric for analysis of learned representation quality. It seems however that measuring focality is only possible with brute force experiments which could make it an unwieldy tool. Expanding on focality as a tool for representation evaluation, both in theory and practice, could strengthen this paper significantly.
The experiments use a small text corpus to demonstrate two SVDs as producing better representations than one. There is much room for improvement in the experiment section. In particular, there are several word representation benchmarks the authors could use to assess the quality of the proposed method relative to previous work:
- Turian et al (ACL 2010) compare several word representations and release benchmark code.
- Socher et al (EMNLP 2011) release a multi-dimensional sentiment analysis corpus and use neural nets to train word representations
- Maas et al (ACL 2011) release a large semi-supervised sentiment analysis corpus and directly compare SVD-obtained word representations with other models
The experiments given are a reasonable sanity check for the model and demonstration of the introduced focality metric. However, the paper would be greatly improved by comparing to previous work on at least one of the tasks in papers listed above.
The 1LAYER vs 2LAYER experiment is not clearly explained. Please expand on the difference in 1 vs 2 layers and the experimental result.
To summarize:
- Novel layer-wise SVD approach to inducing word vectors. Needs to be better explained in the context of matrix factorization alternatives
- Novel 'focality' metric which could serve as a tool for measuring learned representation quality. Metric needs more explanation / analysis.
- Experiments don't demonstrate the model relative to previous work. This is a major omission since many recent alternatives exist and comparison experiments should be straightforward given that several public datasets exist
- Overall paper is fairly clear but could use some work |
9bFY3t2IJ19AC | Affinity Weighted Embedding | [
"Jason Weston",
"Ron Weiss",
"Hector Yee"
] | Supervised (linear) embedding models like Wsabie and PSI have proven successful at ranking, recommendation and annotation tasks. However, despite being scalable to large datasets they do not take full advantage of the extra data due to their linear nature, and typically underfit. We propose a new class of models which aim to provide improved performance while retaining many of the benefits of the existing class of embedding models. Our new approach works by iteratively learning a linear embedding model where the next iteration's features and labels are reweighted as a function of the previous iteration. We describe several variants of the family, and give some initial results. | [
"models",
"affinity",
"linear",
"wsabie",
"psi",
"successful",
"ranking",
"recommendation",
"annotation tasks",
"scalable"
] | https://openreview.net/pdf?id=9bFY3t2IJ19AC | https://openreview.net/forum?id=9bFY3t2IJ19AC | 9A_uTWCfuoTeF | review | 1,362,123,720,000 | 9bFY3t2IJ19AC | [
"everyone"
] | [
"anonymous reviewer 3e4d"
] | ICLR.cc/2013/conference | 2013 | title: review of Affinity Weighted Embedding
review: Affinity Weighted Embedding
Paper summary
This paper extends supervised embedding models by combining them multiplicatively,
i.e. f'(x,y) = G(x,y) f(x,y).
It considers two types of model, dot product in the *embedding* space and kernel density in the *embedding* space, where the kernel in the embedding space is restricted to
k((x,y),(x',y')) = k(x-x')k(y-y').
It proposes an iterative algorithm which alternates f and G parameter updates.
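For concreteness, a rough sketch of the resulting scoring rule — my paraphrase, not the authors' training procedure; the Gaussian kernel, the use of training pairs for the density, and the bandwidth `bw` are assumptions — could look like this:
```python
import numpy as np

def affinity_weighted_score(x, y, U, V, U0, V0, X_train, Y_train, bw=1.0):
    """Score f'(x,y) = G(x,y) * f(x,y): a linear embedding score reweighted
    by a kernel-density affinity computed in a previously learned embedding.

    U, V             : current input/label embeddings, f(x,y) = (Ux).(Vy)
    U0, V0           : embeddings from an earlier iteration, used to build G
    X_train, Y_train : (n, d_x) and (n, d_y) training input/label vectors
    bw               : assumed Gaussian kernel bandwidth
    """
    f = (U @ x) @ (V @ y)                    # linear embedding score f(x, y)
    ex, ey = U0 @ x, V0 @ y                  # embed the query pair with U0, V0
    kx = np.exp(-np.sum((X_train @ U0.T - ex) ** 2, axis=1) / (2 * bw ** 2))
    ky = np.exp(-np.sum((Y_train @ V0.T - ey) ** 2, axis=1) / (2 * bw ** 2))
    G = np.mean(kx * ky)                     # factorized kernel in embedding space
    return G * f
```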
Review Summary
The paper is clear and reads well. The proposed solution is novel. Combining a local kernel and a linear kernel in different embedding spaces could leverage the best characteristics of each (locality for the non-linear one, easier training for the linear one). The experiments are convincing. I would suggest adding the results for G alone.
Review Details
Step (2), i.e. local kernel, is interesting on its own. Could you report its result? The optimization problem seems harder than step (1), could you quantify how much the pretraining with step (1) helps step (2)? A last related question, how do you initialize the parameters for step (3)? |
9bFY3t2IJ19AC | Affinity Weighted Embedding | [
"Jason Weston",
"Ron Weiss",
"Hector Yee"
] | Supervised (linear) embedding models like Wsabie and PSI have proven successful at ranking, recommendation and annotation tasks. However, despite being scalable to large datasets they do not take full advantage of the extra data due to their linear nature, and typically underfit. We propose a new class of models which aim to provide improved performance while retaining many of the benefits of the existing class of embedding models. Our new approach works by iteratively learning a linear embedding model where the next iteration's features and labels are reweighted as a function of the previous iteration. We describe several variants of the family, and give some initial results. | [
"models",
"affinity",
"linear",
"wsabie",
"psi",
"successful",
"ranking",
"recommendation",
"annotation tasks",
"scalable"
] | https://openreview.net/pdf?id=9bFY3t2IJ19AC | https://openreview.net/forum?id=9bFY3t2IJ19AC | X-2g4ZbGhE5Gf | review | 1,363,646,880,000 | 9bFY3t2IJ19AC | [
"everyone"
] | [
"Jason Weston"
] | ICLR.cc/2013/conference | 2013 | review: - The results of G alone are basically the 'k-Nearest Neighbor (Wsabie space)' results that are in the tables.
- We initialized the parameters of step 3 with the ones from step 1. Without this, I think the results could be worse: if G is sparse, you lose a lot of the pairwise label comparisons from training, so with the increased capacity it becomes easier to overfit. This may not be necessary if the dataset is big enough.
- Running time depends on the cost of computing G. In the imagenet experiments we did the full nearest neighbor computation (computed in parallel) which is obviously very costly (proportional to the training set size). However approximate kNN could also be considered as we said, amongst other choices of G. |
9bFY3t2IJ19AC | Affinity Weighted Embedding | [
"Jason Weston",
"Ron Weiss",
"Hector Yee"
] | Supervised (linear) embedding models like Wsabie and PSI have proven successful at ranking, recommendation and annotation tasks. However, despite being scalable to large datasets they do not take full advantage of the extra data due to their linear nature, and typically underfit. We propose a new class of models which aim to provide improved performance while retaining many of the benefits of the existing class of embedding models. Our new approach works by iteratively learning a linear embedding model where the next iteration's features and labels are reweighted as a function of the previous iteration. We describe several variants of the family, and give some initial results. | [
"models",
"affinity",
"linear",
"wsabie",
"psi",
"successful",
"ranking",
"recommendation",
"annotation tasks",
"scalable"
] | https://openreview.net/pdf?id=9bFY3t2IJ19AC | https://openreview.net/forum?id=9bFY3t2IJ19AC | T5KWotfp6lot7 | review | 1,362,229,560,000 | 9bFY3t2IJ19AC | [
"everyone"
] | [
"anonymous reviewer 0248"
] | ICLR.cc/2013/conference | 2013 | title: review of Affinity Weighted Embedding
review: This work proposes a new nonlinear embedding model and applies it to a music annotation and image annotation task. Motivated by the fact that linear embedding models typically underfit on large datasets, the authors propose a nonlinear embedding model with greater capacity. This model weights examples in the embedding by their affinity in an initial linear embedding. The model achieves modest performance improvements on a music annotation task, and large performance improvements on ImageNet annotation. The ImageNet result achieves comparable performance to a very large convolutional net.
The model presented in the paper is novel, addresses an apparent need for a higher capacity model class, and achieves good performance on a very challenging problem.
The paper is clear but has a rushed feel, with some explanations being extremely terse. Although the details of the algorithms and experiments are specified, the intuition behind particular algorithmic design choices is not spelled out and the paper would be stronger if these were.
The experimental results are labeled 'preliminary,' and although they demonstrate good performance on ImageNet, they do not carefully investigate how different design choices impact performance. The ImageNet performance comparisons to related algorithms are hard to interpret because of a different train/testing split, and because a recent highly performing convolutional net was not considered (though the authors discuss its likely superior performance).
Finally, the presented experiments focus on performance on tasks of interest, but do not address the running time and storage cost of the algorithm. The authors mention the fact that their algorithm is more computationally and space-intensive than linear embedding; it would be useful to see running times (particularly in comparison to Dean et al. and Krizhevsky et al.) to give a more complete picture of the advantages of the algorithm. |
11y_SldoumvZl | Factorized Topic Models | [
"Cheng Zhang",
"Carl Henrik Ek",
"Hedvig Kjellstrom"
] | In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. The approach allows for a more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for both image and text classification. | [
"topic models",
"factorized representation",
"variance",
"new type",
"latent topic model",
"supervision",
"observed data",
"structured parameterization",
"classes",
"private"
] | https://openreview.net/pdf?id=11y_SldoumvZl | https://openreview.net/forum?id=11y_SldoumvZl | gD5ygpn3FZ9Tf | review | 1,362,079,980,000 | 11y_SldoumvZl | [
"everyone"
] | [
"anonymous reviewer c82a"
] | ICLR.cc/2013/conference | 2013 | title: review of Factorized Topic Models
review: * A brief summary of the paper's contributions, in the context of prior work.
This paper suggests an improvement over the LDA topic model with class labels of Fei-Fei and Perona [6], which consists in the incorporation of a prior that encourages the class conditional topic distributions to either be specific to a particular class or to be 'shared' across classes. Experiments suggest that this change to the original LDA model of [6] yields topics that are sharply divided into class-specified or shared topics and that are jointly more useful as a discriminative latent representation.
* An assessment of novelty and quality.
I like the motivation behind this work: designing models that explicitly try to separate the class-specific and class-invariant factors of variation is certainly an important goal and makes for a particularly appropriate topic at a conference on learning representations.
The novelty behind this paper is not great, since it adds a small component to a known model. But I wouldn't see this as a strong reason for not accepting this paper. There are, however, other issues which are more serious.
First, I find the mathematical description of the model to be imprecise. The main contribution of this work lies in the specification of a class-dependent prior over topics. It corresponds to the product of the regular prior from [6] and a new prior factor p(theta | kappa), which as far as I know is not explicitly defined anywhere. The authors only describe how this prior affects learning, but since no explicit definition of p(theta | kappa) is given, we can't verify that the learning algorithm is consistent with the definition of the prior. Given that the learning algorithm is somewhat complicated, involving some annealing process, I think a proper, explicit definition of the model is important, since it can't be derived easily from the learning algorithm.
I also find it confusing that the authors refer to h(k) (Eq. 3) as an entropy. To be an entropy, it would need to involve a sum over k, not over c. Even the plate graphical representation of the new model is hard to understand, since theta is present in two separate plates (over M).
Finally, since there are other alternatives than [6] to supervised training of an LDA topic model, I think a comparison with these other alternatives would be in order. In particular, I'm thinking of the two following alternatives:
Supervised Topic Models, by Blei and McAuliffe, 2007
DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification, by Lacoste-Julien, Sha and Jordan, 2008
I think these alternatives should at least be discussed, and one should probably be added as a baseline in the experiments.
As a side comment (and not as a strong criticism of this paper), I'd like to add that I don't think the state of the art for scene classification (or object recognition in general) is actually based on LDA. My understanding is that approaches based on sparse coding + max pooling + linear SVM are better. I still think it's OK for some work to focus on improving a particular class of models. But at one point, perhaps a comparison to these other approaches should be considered.
* A list of pros and cons (reasons to accept/reject).
|Pros|
- attacks an important problem, that of discovering and separating the factors of variation of a data distribution that are either class-dependent or class-shared, in the context of a topic model
|Cons|
- somewhat incremental work
- description of the model is not enough detailed
- no comparison with alternative supervised LDA models |
11y_SldoumvZl | Factorized Topic Models | [
"Cheng Zhang",
"Carl Henrik Ek",
"Hedvig Kjellstrom"
] | In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. The approach allows for a more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for both image and text classification. | [
"topic models",
"factorized representation",
"variance",
"new type",
"latent topic model",
"supervision",
"observed data",
"structured parameterization",
"classes",
"private"
] | https://openreview.net/pdf?id=11y_SldoumvZl | https://openreview.net/forum?id=11y_SldoumvZl | ADCLANJlZFDlw | comment | 1,362,753,420,000 | gD5ygpn3FZ9Tf | [
"everyone"
] | [
"Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom"
] | ICLR.cc/2013/conference | 2013 | reply: We would like to thank the reviewers for their insightful comments about the paper. We will first provide general comments in response to issues raised by more than one reviewer, and then discuss each of the reviews in more detail.
From reading the reviews, we realize that the main contribution of the paper seems to have been obscured in the presentation - for example, due to a formulation in the beginning of the abstract (which now is changed). We do not propose a new topic model, but rather introduce a method for latent factorization in topic models. The method that we propose is general and can be adopted to many different topic models.
Several tasks benefit from a factorized topic space; classification - the task we use as an example in the paper - is just one. Factorized models produce interpretable latent spaces, which has been exploited in continuous models for synthesis, as in [A], or for ambiguity modelling or domain transfer, as in [5] (Ek et al.) and [B]. We believe these benefits transfer to topic models as well.
It would be very interesting to evaluate the benefit of a factorized topic space for a much larger range of topic models than what we do in this paper - this is beyond the scope of this paper but will definitely be pursued in a future journal version.
In a revised version of the paper, which is now uploaded to ArXiv, we have however added results from the SLDA model of Blei and McAuliffe, as a second baseline in the experiments, as suggested by reviewers c82a and fda8. The factorized LDA consistently performs better than both the regular LDA and SLDA.
To stress the focus on factorization rather than a specific classification application, we have furthermore added an experiment with video classification. Other changes, as described below, are also included in this new paper version.
New references (included in the new version):
[A] A. C. Damianou, C. H. Ek, M. Titsias, and N. D. Lawrence, “Manifold Relevance Determination,” International Conference on Machine Learning, 2012.
[B] R. Navaratnam, A. W. Fitzgibbon, and R. Cipolla, “The joint manifold model for semi-supervised multi-valued regression,” IEEE International Conference on Computer Vision, 2007.
Reviewer c82a:
We agree with reviewer c82a that we used the word entropy in a rather sloppy manner. We have strived to make the distinction clear in the revised version.
In Figure 1(b), theta in the main plate is connected with another theta outside, since we use all the topics in theta to compute the entropy-like information measure for each topic theta_m. In this, we adopt a graphical notation similar to [9] (Jia et al.). This is explained more thoroughly in the revised version of the paper.
Moreover, p(theta | kappa) is proportional to F(k) in Equation (8). In the revised version of the paper, we explicitly state the form of the proposed prior.
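For intuition, a minimal sketch of such an entropy-like class-specificity measure (summed over the classes c for each topic, with low values marking class-specific topics and high values marking shared ones) could look like the following; this is illustrative only and the normalization is an assumption, not the exact form of F(k) in the paper:
```python
import numpy as np

def class_specificity(theta):
    """Entropy-like measure per topic, summed over classes.

    theta : (n_classes, n_topics) class-conditional topic proportions
    Low values mark class-specific (signal) topics, high values mark
    topics shared across classes (structured noise).
    """
    p = theta / (theta.sum(axis=0, keepdims=True) + 1e-12)   # p(c | topic k)
    return -np.sum(p * np.log(p + 1e-12), axis=0)            # H(k): sum over c
```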
Finally, as reviewer c82a clearly states, topic models do not produce state-of-the-art results for scene classification (however, they do produce state-of-the-art results in other domains, such as text). The motivation for using the current classification tasks is that we find that they provide a nice intuition into why one would want a factorized representation, which is able to model separately the 'important information' (class-dependent) and the 'unimportant information/noise' (class-independent).
Reviewer 232f:
As reviewer 232f correctly states, the class-dependent and the class-independent topics jointly encode the variations in the data. The argument is not, as reviewer 232f suggests, to throw the class-independent topics away - they are important for explaining parts of the data variation. This is not suggested anywhere in the paper. There are many motivations for learning a factorization. In the example application used in the paper, classification, the class dependent topics are the important ones. However, in a transfer learning scenario, the class-independent information is highly relevant. The manner in which factorization is used is highly application and domain specific; in this paper we exemplify one use for classification.
As reviewer 232f points out, using a feature that has been created for discriminative methods in a generative framework might not be particularly sensible. Our motivation for still taking this approach is to make a fair comparison to other topic models, for example, [6] (Fei-Fei and Perona).
We have replaced the term 'view' with 'modality' in the revised version of the paper, and also clarified the relation of our factorization method to the multi-modality methods cited in Section 2. In the literature on factorized latent variable models the word 'view' is predominantly used, but we think that 'modality' is clearer here.
Reviewer fda8:
As reviewer fda8 points out, we could achieve the same effect by using a beta distribution instead of A in Equation (7). However, it would still require an entropy-like measurement to steer the beta distribution so as to achieve the desired factorization.
As described above, we have added results using SLDA, and show that the factorized LDA consistently performs better than both regular LDA and SLDA. However, we did not have time to implement other variants suggested by reviewer fda8 - this is definitely something which is interesting to do for a journal version. |
11y_SldoumvZl | Factorized Topic Models | [
"Cheng Zhang",
"Carl Henrik Ek",
"Hedvig Kjellstrom"
] | In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. The approach allows for a more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for both image and text classification. | [
"topic models",
"factorized representation",
"variance",
"new type",
"latent topic model",
"supervision",
"observed data",
"structured parameterization",
"classes",
"private"
] | https://openreview.net/pdf?id=11y_SldoumvZl | https://openreview.net/forum?id=11y_SldoumvZl | rr6RmiA9Hhs9i | review | 1,362,457,800,000 | 11y_SldoumvZl | [
"everyone"
] | [
"anonymous reviewer fda8"
] | ICLR.cc/2013/conference | 2013 | title: review of Factorized Topic Models
review: This paper introduces a new prior for topics in LDA to disentangle general variance and class specific variance.
The other reviews already mentioned the lack of novelty and some missing descriptions. Concretely, the definition of p(theta | kappa), which is central to this paper, is not clear. Instead of defining these A(k) functions in figure 7, couldn't you just use a beta distribution as a prior?
In order to publish yet another variant of LDA, more comparisons are needed to the many other LDA-based models already published.
In particular, this paper tackles common issues that have been addressed many times by other authors. In order to have a convincing argument for introducing another LDA-like model, some form of comparison would be needed to a subset of these:
- the supervised topic models of Blei et al.,
- DiscLDA from Lacoste,
- partially labeled LDA from Ramage et al.,
- Factorial LDA: Sparse Multi-Dimensional Text Models by Paul and Dredze,
- Modeling General and Specific Aspects of Documents with a Probabilistic Topic Model by Chemudugunta et al.
- sparsity inducing topic models of Chong Wang et al.
Unless the current model outperforms at least a couple of the above related models it is hard to argue for acceptance. |
11y_SldoumvZl | Factorized Topic Models | [
"Cheng Zhang",
"Carl Henrik Ek",
"Hedvig Kjellstrom"
] | In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. The approach allows for a more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for both image and text classification. | [
"topic models",
"factorized representation",
"variance",
"new type",
"latent topic model",
"supervision",
"observed data",
"structured parameterization",
"classes",
"private"
] | https://openreview.net/pdf?id=11y_SldoumvZl | https://openreview.net/forum?id=11y_SldoumvZl | 8nXtnZf5sU-bd | comment | 1,363,382,280,000 | eeCgjoYcgmDco | [
"everyone"
] | [
"Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom"
] | ICLR.cc/2013/conference | 2013 | reply: Once again, thanks to reviewer c82a for very helpful comments. We agree that the statement regarding connection between the prior and F(k) was not correct. The parameter kappa should not be considered as a prior in the model, instead it is used as a implementation specific parameter. We have now reorganized the paper so that the model section contains a definition of the factorizing prior p( heta) (Eq (5)), which we believe will make things a lot clearer. Furthermore, we have added an appendix B which gives details about the training of the factorized LDA using Gibbs sampling explaining the role of kappa. The definition of F(k) has been moved in appendix B, and its relation to the prior (Eq (5)) has been made clearer.
The definition of the measure H of class-specificity (Eq (3)) has been made clearer by improving the notation.
The new version of the paper has been uploaded to arXiv and will become publicly visible on Mon, March 18, 2013, 00:00:00 GMT. Thanks a lot in advance for your time. |
11y_SldoumvZl | Factorized Topic Models | [
"Cheng Zhang",
"Carl Henrik Ek",
"Hedvig Kjellstrom"
] | In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. The approach allows for a more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for both image and text classification. | [
"topic models",
"factorized representation",
"variance",
"new type",
"latent topic model",
"supervision",
"observed data",
"structured parameterization",
"classes",
"private"
] | https://openreview.net/pdf?id=11y_SldoumvZl | https://openreview.net/forum?id=11y_SldoumvZl | YYiHlnPjU5YVO | review | 1,363,623,420,000 | 11y_SldoumvZl | [
"everyone"
] | [
"Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom"
] | ICLR.cc/2013/conference | 2013 | review: Dear reviewers,
the new version of the paper, addressing all the changes described in our comments, is now publicly visible on arXiv.
Thanks in advance for your time. |
11y_SldoumvZl | Factorized Topic Models | [
"Cheng Zhang",
"Carl Henrik Ek",
"Hedvig Kjellstrom"
] | In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. The approach allows for a more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for both image and text classification. | [
"topic models",
"factorized representation",
"variance",
"new type",
"latent topic model",
"supervision",
"observed data",
"structured parameterization",
"classes",
"private"
] | https://openreview.net/pdf?id=11y_SldoumvZl | https://openreview.net/forum?id=11y_SldoumvZl | eeCgjoYcgmDco | comment | 1,363,139,160,000 | ADCLANJlZFDlw | [
"everyone"
] | [
"anonymous reviewer c82a"
] | ICLR.cc/2013/conference | 2013 | reply: The additional comparison with SLDA is a good step in the right direction and certainly improves my personal appreciation of this paper.
Unfortunately, I still can't vouch for the validity of the learning algorithm. First, I'm now even more confused as to what the prior actually is. Indeed, the prior is stated to be proportional to F(k), which depends on a certain Delta A. Also, Delta A, as I understand it, is computed from a comparison between two topic assignments (sampled during Gibbs sampling). However, p(theta | kappa) should not depend on topic assignments, since it is a *prior* over the parameters of the topic assignment distribution (p(z|theta)). I'm under the impression that the authors are confusing the prior and the posterior over theta here (the latter being involved in the process of Gibbs sampling, which would indeed involve comparisons between topic assignments).
I also still don't find it obvious how the authors derived their learning algorithm from the proposed novel prior. What is the training objective function exactly? How is Gibbs sampling involved in the gradient descent optimization on that objective? How does one derive the specific sampling process described in this paper from the general procedure of Gibbs sampling? These might seem to the authors like questions with obvious answers, but answering them would help a lot for the reader to understand the learning algorithm and be able to reimplement it. |
11y_SldoumvZl | Factorized Topic Models | [
"Cheng Zhang",
"Carl Henrik Ek",
"Hedvig Kjellstrom"
] | In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. The approach allows for a more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for both image and text classification. | [
"topic models",
"factorized representation",
"variance",
"new type",
"latent topic model",
"supervision",
"observed data",
"structured parameterization",
"classes",
"private"
] | https://openreview.net/pdf?id=11y_SldoumvZl | https://openreview.net/forum?id=11y_SldoumvZl | InujBpA-6qILy | review | 1,362,214,440,000 | 11y_SldoumvZl | [
"everyone"
] | [
"anonymous reviewer 232f"
] | ICLR.cc/2013/conference | 2013 | title: review of Factorized Topic Models
review: This paper presents an extension of Latent Dirichlet Allocation (LDA) which explicitly factors data into a structured noise part (variation shared among classes) and a signal part (variation within a specific class). The model is shown to outperform a baseline of LDA with class labels (Fei-Fei and Perona). The authors also show that the model can extract class-specific and class-shared variability by examining the learned topics.
The authors show that the new model can outperform standard LDA on classification tasks; however, it's not clear to me why one would necessarily use an LDA-based topic model (or topic models in general) if one is just interested in classification. In the introduction, the paper motivates the use of generative models (all well-known reasons - learning from sparse data, handling missing observations, providing estimates of data uncertainty, etc.), but none of these situations are explored in the experiments. So in the end, the paper shows a model that is not really necessarily good at classification being improved, but not to the point where it's better than discriminative models, and not in a context where a generative model would really be helpful.
Positive points:
* The method seems sound in its motivation and construction
* The model is shown to work on different modalities (text and images)
* The model outperforms classical LDA for classification
Negative points
* As per my comments above, there may be situations in which one would want to use this type of model for classification, but they haven't been explored in this paper
* The argument that the model produces sparser topic representations could be more convincing: in 4.3, the paper claims that the class-specific topic space effectively used for classification consists of 8 topics, where 12 topics are devoted to modeling structured noise; however the 12 noise topics still form part of the representation. Is the argument that the non-class topics would be thrown away after learning while the class-specific topics are retained and/or stored?
Specific comments:
In the third paragraph of the introduction, I'm not sure about choosing SIFT as the feature extraction step of this example of inference in generative models. I can't think of examples where SIFT has been used as part of a generative model -- it seems to be a classical feature extraction step for discriminative methods. Therefore, why not use an example of features that are learned generatively for this example?
In Section 2, the paper begins to talk about 'views' without really defining what is meant by a 'view'. Earlier, the paper discussed 'class-dependent' and 'class-independent' variance, and now 'view-dependent' and 'view-independent' variance - but they are not the same thing (though connected, as in the next paragraph the paper describes providing class labels as an additional 'view'). Perhaps it's best just to define up-front what's meant by a 'view'. If 'views' are just being used as a generalization of classes here in describing related work, just state that. Maybe the generalization for purposes of this discussion is not even necessary and the concept of 'view' just adds confusion.
Section 2: 'only the private class topics contain the relevant _____ for class inference'.
End of section 3: 'we associate *low*-entropy topics as class-dependent while *low*-entropy topics are considered as independent' ? (change second low to high) |
11y_SldoumvZl | Factorized Topic Models | [
"Cheng Zhang",
"Carl Henrik Ek",
"Hedvig Kjellstrom"
] | In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. The approach allows for a more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for both image and text classification. | [
"topic models",
"factorized representation",
"variance",
"new type",
"latent topic model",
"supervision",
"observed data",
"structured parameterization",
"classes",
"private"
] | https://openreview.net/pdf?id=11y_SldoumvZl | https://openreview.net/forum?id=11y_SldoumvZl | LpoA5MF9bm520 | review | 1,362,753,660,000 | 11y_SldoumvZl | [
"everyone"
] | [
"Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom"
] | ICLR.cc/2013/conference | 2013 | review: We would like to thank the reviewers for their insightful comments about the paper. We will first provide general comments in response to issues raised by more than one reviewer, and then discuss each of the reviews in more detail.
From reading the reviews, we realize that the main contribution of the paper seems to have been obscured in the presentation - for example, due to a formulation in the beginning of the abstract (which has now been changed). We do not propose a new topic model, but rather introduce a method for latent factorization in topic models. The method that we propose is general and can be adapted to many different topic models.
Several tasks benefit from a factorized topic space; classification - the one we use as an example in the paper - is just one. Factorized models produce interpretable latent spaces, which has been exploited in continuous models for synthesis, as in [A], or for ambiguity modelling or domain transfer, as in [5] (Ek et al.) and [B]. We believe these benefits transfer to topic models as well.
It would be very interesting to evaluate the benefit of a factorized topic space for a much larger range of topic models than what we do in this paper - this is beyond the scope of this paper but will definitely be pursued in a future journal version.
In a revised version of the paper, which is now uploaded to ArXiv, we have however added results from the SLDA model of Blei and McAuliffe, as a second baseline in the experiments, as suggested by reviewers c82a and fda8. The factorized LDA consistently performs better than both the regular LDA and SLDA.
To stress the focus on factorization rather than a specific classification application, we have furthermore added an experiment with video classification. Other changes, as described below, are also included in this new paper version.
New references (included in the new version):
[A] A. C. Damianou, C. H. Ek, M. Titsias, and N. D. Lawrence, “Manifold Relevance Determination,” International Conference on Machine Learning, 2012.
[B] R. Navaratnam, A. W. Fitzgibbon, and R. Cipolla, “The joint manifold model for semi-supervised multi-valued regression,” IEEE International Conference on Computer Vision, 2007.
Reviewer c82a:
We agree with reviewer c82a that we used the word entropy in a rather sloppy manner. We have strived to make the distinction clear in the revised version.
In Figure 1(b), theta in the main plate is connected with another theta outside, since we use all the topics in theta to compute the entropy-like information measure for each topic theta_m. In this, we adopt a graphical notation similar to [9] (Jia et al.). This is explained more thoroughly in the revised version of the paper.
Moreover, p(theta | kappa) is proportional to F(k) in Equation (8). In the revised version of the paper, we explicitly state the form of the proposed prior.
Finally, as reviewer c82a clearly states, topic models do not produce state-of-the-art results for scene classification (however, they do produce state-of-the-art results in other domains, such as text). The motivation for using the current classification tasks is that we find that they provide a nice intuition into why one would want a factorized representation, which is able to model separately the 'important information' (class-dependent) and the 'unimportant information/noise' (class-independent).
Reviewer 232f:
As reviewer 232f correctly states, the class-dependent and the class-independent topics jointly encode the variations in the data. The argument is not, as reviewer 232f suggests, to throw the class-independent topics away - they are important for explaining parts of the data variation. This is not suggested anywhere in the paper. There are many motivations for learning a factorization. In the example application used in the paper, classification, the class dependent topics are the important ones. However, in a transfer learning scenario, the class-independent information is highly relevant. The manner in which factorization is used is highly application and domain specific; in this paper we exemplify one use for classification.
As reviewer 232f points out, using a feature that has been created for discriminative methods in a generative framework might not be particularly sensible. Our motivation for still taking this approach is to make a fair comparison to other topic models, for example, [6] (Fei-Fei and Perona).
We have replaced the term 'view' with 'modality' in the revised version of the paper, and also clarified the relation of our factorization method to the multi-modality methods cited in Section 2. In the literature on factorized latent variable models the word 'view' is predominantly used, but we think that 'modality' is clearer here.
Reviewer fda8:
As reviewer fda8 points out, we could achieve the same effect by using a beta distribution instead of A in Equation (7). However, it would still require an entropy-like measurement to steer the beta distribution so as to achieve the desired factorization.
As described above, we have added results using SLDA, and show that the factorized LDA consistently performs better than both regular LDA and SLDA. However, we did not have time to implement other variants suggested by reviewer fda8 - this is definitely something which is interesting to do for a journal version. |
bI58OFtQlLOQ7 | Deep Learning for Detecting Robotic Grasps | [
"Ian Lenz",
"Honglak Lee",
"Ashutosh Saxena"
] | In this work, we consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. We present a two-step cascaded structure, where we have two deep networks, with the top detections from the first one re-evaluated by the second one. The first deep network has fewer features, is therefore faster to run and makes more mistakes. The second network has more features and therefore gives better detections. Unlike previous works that need to design these features manually, deep learning gives us flexibility in designing such multi-step cascaded detectors. | [
"deep learning",
"robotic grasps",
"features",
"work",
"problem",
"view",
"scene",
"objects",
"cascaded structure"
] | https://openreview.net/pdf?id=bI58OFtQlLOQ7 | https://openreview.net/forum?id=bI58OFtQlLOQ7 | Fsg-G38UWSlUP | review | 1,362,414,180,000 | bI58OFtQlLOQ7 | [
"everyone"
] | [
"anonymous reviewer cf06"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep Learning for Detecting Robotic Grasps
review: This paper uses a two-pass detection mechanism with sparse autoencoders for robotic grasp detection, a new application of deep learning. The methods used are fairly standard by now (two pass and autoencoders), so the main novelty of the paper is its nice application. It shows good results, which are well presented and hold the promise of future extensions in this area.
The main issue I have with the paper is that it seems 'unfinished'; text-wise, I would have liked to see a proper conclusion and some more details on training; regarding its methods, I have the feeling this is work in its early stages.
pros:
- novel and successful application
- expert implementation of deep learning
cons:
- 'unfinished' early work
- this is an application paper, not a novel method (admittedly not necessarily a 'con') |
bI58OFtQlLOQ7 | Deep Learning for Detecting Robotic Grasps | [
"Ian Lenz",
"Honglak Lee",
"Ashutosh Saxena"
] | In this work, we consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. We present a two-step cascaded structure, where we have two deep networks, with the top detections from the first one re-evaluated by the second one. The first deep network has fewer features, is therefore faster to run and makes more mistakes. The second network has more features and therefore gives better detections. Unlike previous works that need to design these features manually, deep learning gives us flexibility in designing such multi-step cascaded detectors. | [
"deep learning",
"robotic grasps",
"features",
"work",
"problem",
"view",
"scene",
"objects",
"cascaded structure"
] | https://openreview.net/pdf?id=bI58OFtQlLOQ7 | https://openreview.net/forum?id=bI58OFtQlLOQ7 | Sl9E4V1iE8lfU | review | 1,362,192,180,000 | bI58OFtQlLOQ7 | [
"everyone"
] | [
"anonymous reviewer b096"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep Learning for Detecting Robotic Grasps
review: Summary: this paper uses the common 2-step procedure to first eliminate most of the unlikely detection windows (high recall), then use a network with higher capacity for better discrimination (high precision). Deep learning (in the unsupervised sense) helps obtain features optimized for each of these two different tasks, adapt them to different situations (different robotic grippers), and beat hand-designed features for detection of graspable areas, using a mixture of inputs (depth + rgb + xyz).
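To make the two-step structure concrete, here is a minimal sketch (illustrative only; the network objects, their score functions and the value of k are placeholders, not taken from the paper):

    def detect_grasp(image, small_net, large_net, candidate_rects, k=100):
        # Stage 1: the small, fast network scores every candidate rectangle
        # (tuned for high recall; many false positives are acceptable here).
        survivors = sorted(candidate_rects,
                           key=lambda r: small_net.score(image, r),
                           reverse=True)[:k]
        # Stage 2: the larger, slower network re-scores only the survivors
        # (higher precision on a much smaller set of windows).
        return max(survivors, key=lambda r: large_net.score(image, r))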
Novelty: deep learning for detection is not as uncommon as the authors suggest (pedestrian detection by [4] and the ImageNet 2012 detection challenge by Krizhevsky); however, its application to robotic grasp detection is indeed novel. And detecting rotations (optimal grasping detection), while not completely novel, is not extremely common.
Quality: the experiments are well conducted (e.g. proper 5-fold cross validation).
Pros:
- Deep learning successfully demonstrated in a new domain.
- Goes beyond the simpler task of classification.
- Unsupervised learning itself clearly learns interesting 3D features of graspable areas versus non-graspable ones.
- Demonstrates superior results to hand-coded features and automatic adaptability to different grippers.
- The 2-pass shows improvements in quality and ~2x speedup.
Cons:
- Even though the networks are fairly small, the system is still far from real-time. Explaining what the current bottlenecks are, and what further work is planned, would be interesting. Maybe you want to use convolutional networks to speed up detection (no need to recompute each window's features; a lot of them are shared in a detection setting). |
-AIqBI4_qZAQ1 | Information Theoretic Learning with Infinitely Divisible Kernels | [
"Luis Gonzalo Sánchez",
"Jose C. Principe"
] | In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids the plug in estimation of density and brings along the representation power of reproducing kernel Hilbert spaces. We show how analogues to quantities such as conditional entropy can be defined, enabling solutions to learning problems. In particular, we derive a supervised metric learning algorithm with very competitive results. | [
"information theoretic learning",
"functional",
"divisible kernels",
"framework",
"divisible matrices",
"positive definite matrices",
"renyi",
"entropy definition",
"key properties"
] | https://openreview.net/pdf?id=-AIqBI4_qZAQ1 | https://openreview.net/forum?id=-AIqBI4_qZAQ1 | JJQpYH2mRDJmM | review | 1,363,989,120,000 | -AIqBI4_qZAQ1 | [
"everyone"
] | [
"Luis Gonzalo Sánchez"
] | ICLR.cc/2013/conference | 2013 | review: The newest version of the paper will appear on arXiv by Monday March 25th.
In the mean time the paper can be seen at the following link:
https://docs.google.com/file/d/0B6IHvj9GXU3dMk1IeUNfUEpqSmc/edit?usp=sharing |
-AIqBI4_qZAQ1 | Information Theoretic Learning with Infinitely Divisible Kernels | [
"Luis Gonzalo Sánchez",
"Jose C. Principe"
] | In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids the plug in estimation of density and brings along the representation power of reproducing kernel Hilbert spaces. We show how analogues to quantities such as conditional entropy can be defined, enabling solutions to learning problems. In particular, we derive a supervised metric learning algorithm with very competitive results. | [
"information theoretic learning",
"functional",
"divisible kernels",
"framework",
"divisible matrices",
"positive definite matrices",
"renyi",
"entropy definition",
"key properties"
] | https://openreview.net/pdf?id=-AIqBI4_qZAQ1 | https://openreview.net/forum?id=-AIqBI4_qZAQ1 | J04ah1kBas0qR | review | 1,362,229,800,000 | -AIqBI4_qZAQ1 | [
"everyone"
] | [
"anonymous reviewer 4ccd"
] | ICLR.cc/2013/conference | 2013 | title: review of Information Theoretic Learning with Infinitely Divisible Kernels
review: This paper introduces new entropy-like quantities on positive semi definite matrices. These quantities can be directly calculated from the Gram matrix of the data, and they do not require density estimation. This is an attractive property, because density estimation can be difficult in many cases. Based on this theory, the authors propose a supervised metric learning algorithm which achieves competitive results.
Pros: The problem studied in the paper is interesting and important. The empirical results are promising.
Cons:
i) Although I believe that there are many great ideas in the paper, in my opinion the presentation of the paper needs significant improvement. It is very difficult to assess what exactly the novel contributions are in the paper, because the authors didn't separate their new results well enough from the existing results. For example, Section 3 is about infinitely divisible matrices, but I don't know what exactly the new results are in this section.
ii) The introduction and motivation could be improved as well. The main message and its importance are a bit vague to me. I recommend revising Section 1. The main motivation for designing new entropy-like quantities was that density estimation is difficult and we might need lots of sample points to get satisfactory results. It is true that the proposed approach doesn't require density estimation, but it is still not clear whether it works better than those algorithms that use density estimators.
The empirical results seem very promising, so maybe I would emphasize them more.
iii) There are a few places in the text where the presented idea is simple, but it is presented in a complicated way and is therefore difficult to understand. For example, Sections 4.1 and 4.2 seem more difficult than they should be. The definition of the function F is not clear either.
iv) There are a few typos and grammatical mistakes in the paper that also need to be fixed before publication.
For example, on Page 1:
Page 1: know as --> known as
Page 1: jlearning |
-AIqBI4_qZAQ1 | Information Theoretic Learning with Infinitely Divisible Kernels | [
"Luis Gonzalo Sánchez",
"Jose C. Principe"
] | In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids the plug in estimation of density and brings along the representation power of reproducing kernel Hilbert spaces. We show how analogues to quantities such as conditional entropy can be defined, enabling solutions to learning problems. In particular, we derive a supervised metric learning algorithm with very competitive results. | [
"information theoretic learning",
"functional",
"divisible kernels",
"framework",
"divisible matrices",
"positive definite matrices",
"renyi",
"entropy definition",
"key properties"
] | https://openreview.net/pdf?id=-AIqBI4_qZAQ1 | https://openreview.net/forum?id=-AIqBI4_qZAQ1 | suhMsqNkdKs6R | comment | 1,363,799,700,000 | 5pA7ERXu7H5uQ | [
"everyone"
] | [
"Luis Gonzalo Sánchez"
] | ICLR.cc/2013/conference | 2013 | reply: This is the same comment from below, we just realized that this is the reply button for your comments.
Dear reviewer, we appreciate the comments and the effort put into reviewing our work. We believe you have made a very valid point by asking us about the role of alpha. The order of the matrix entropy acts as an Lp norm on the eigenvalues of the Gram matrix: the larger the order, the more emphasis is placed on the largest eigenvalues. This behaviour translates into our metric learning algorithm as going from a multimodal, very flexible class structure towards a unimodal, more constrained class structure as we increase alpha. We include an example that illustrates this behaviour. With respect to HSIC, it is true that for alpha = 2 the trace of K^2 bears some resemblance to the criterion. However, there are several differences that make the connection hard to establish. First, when dealing with covariance operators it is already assumed that the mean elements have been removed (the covariance operator is centred). As we see from the introductory motivation, the second-order entropy is the norm of the mean vector in the RKHS. If the mean is removed this vector has zero norm. We also require our Gram matrix to have non-negative entries so that our information theoretic interpretation makes sense. We have now included comparisons with NCA. |
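As a small numerical illustration of the role of alpha (a toy example of our own, not from the paper): writing the functional as $S_\alpha(A) = \frac{1}{1-\alpha}\log_2\sum_i \lambda_i(A)^\alpha$ for a unit-trace Gram matrix with eigenvalues $(0.7, 0.2, 0.1)$, we obtain approximately $1.16$ bits in the limit $\alpha \to 1$ (Shannon), $-\log_2(0.7^2+0.2^2+0.1^2) \approx 0.89$ bits for $\alpha = 2$, and $-\log_2 0.7 \approx 0.51$ bits as $\alpha \to \infty$. Larger orders are thus increasingly dominated by the leading eigenvalues, i.e. by the dominant directions of the data in the RKHS, which is what produces the more constrained, unimodal class structure mentioned above.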
-AIqBI4_qZAQ1 | Information Theoretic Learning with Infinitely Divisible Kernels | [
"Luis Gonzalo Sánchez",
"Jose C. Principe"
] | In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids the plug in estimation of density and brings along the representation power of reproducing kernel Hilbert spaces. We show how analogues to quantities such as conditional entropy can be defined, enabling solutions to learning problems. In particular, we derive a supervised metric learning algorithm with very competitive results. | [
"information theoretic learning",
"functional",
"divisible kernels",
"framework",
"divisible matrices",
"positive definite matrices",
"renyi",
"entropy definition",
"key properties"
] | https://openreview.net/pdf?id=-AIqBI4_qZAQ1 | https://openreview.net/forum?id=-AIqBI4_qZAQ1 | ssJmfOuxKafV5 | review | 1,363,775,340,000 | -AIqBI4_qZAQ1 | [
"everyone"
] | [
"Luis Gonzalo Sánchez"
] | ICLR.cc/2013/conference | 2013 | review: The new version of the paper can be accessed through
https://docs.google.com/file/d/0B6IHvj9GXU3dekxXMHZVdmphTXc/edit?usp=sharing
until it is updated in arXiv |
-AIqBI4_qZAQ1 | Information Theoretic Learning with Infinitely Divisible Kernels | [
"Luis Gonzalo Sánchez",
"Jose C. Principe"
] | In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids the plug in estimation of density and brings along the representation power of reproducing kernel Hilbert spaces. We show how analogues to quantities such as conditional entropy can be defined, enabling solutions to learning problems. In particular, we derive a supervised metric learning algorithm with very competitive results. | [
"information theoretic learning",
"functional",
"divisible kernels",
"framework",
"divisible matrices",
"positive definite matrices",
"renyi",
"entropy definition",
"key properties"
] | https://openreview.net/pdf?id=-AIqBI4_qZAQ1 | https://openreview.net/forum?id=-AIqBI4_qZAQ1 | cUCwU-yxtoURe | review | 1,362,176,820,000 | -AIqBI4_qZAQ1 | [
"everyone"
] | [
"anonymous reviewer 2169"
] | ICLR.cc/2013/conference | 2013 | title: review of Information Theoretic Learning with Infinitely Divisible Kernels
review: The paper introduces a new approach to supervised metric learning. The
setting is somewhat similar to the information-theoretic approach of
Davis et al. (2007). The main difference is that here the
parameterized Mahalanobis distance is tuned by optimizing a new
information-theoretical criterion, based on a matrix functional
inspired by Renyi's entropy. Eqs. (5), (11) and (19) and their
explanations are basically enough to grasp the basic idea. In order to
reach the above goal, several mathematical technicalities are
necessary and well developed in the paper. A key tool are infinitely
divisible matrices.
+ New criterion for information-theoretic learning
+ The mathematical development is sound
+/- The Renyi-inspired functional could be useful in other contexts
(but details remain unanswered in the paper)
- The presentation is very technical and goes bottom-up making it
difficult to get the 'big picture' (which is not too complicated)
until Section 4 (also it's not immediately clear which parts
convey the essential message of the paper and which parts are just
technical details, for example Section 2.1 could be safely moved
into an appendix mentioning the result when needed).
- Experiments show that the method works. I think this is almost
enough for a conference paper. Still, it would improve the paper
to see a clear direct comparison between this approach and
KL-divergence where the advantages outlined in the conclusions
(quote: 'The proposed quantities do not assume that the density of
the data has been estimated, which avoids the difficulties related
to it.') are really appreciated. Perhaps an experiment with
artificial data could be enough to complete this paper but real
world applications would be nice to see in the future.
Minor:
Section 2. Some undefined symbols: $M_n$, $\sigma(A)$ (spectrum of A?)
Page 3: I think you mean
'where $n_1$ of the entries are 1' -> 'where $n_1$ of the entries of $\mathbf{1}$ are 1' |
-AIqBI4_qZAQ1 | Information Theoretic Learning with Infinitely Divisible Kernels | [
"Luis Gonzalo Sánchez",
"Jose C. Principe"
] | In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids the plug in estimation of density and brings along the representation power of reproducing kernel Hilbert spaces. We show how analogues to quantities such as conditional entropy can be defined, enabling solutions to learning problems. In particular, we derive a supervised metric learning algorithm with very competitive results. | [
"information theoretic learning",
"functional",
"divisible kernels",
"framework",
"divisible matrices",
"positive definite matrices",
"renyi",
"entropy definition",
"key properties"
] | https://openreview.net/pdf?id=-AIqBI4_qZAQ1 | https://openreview.net/forum?id=-AIqBI4_qZAQ1 | nVC7VhbpFDnlL | comment | 1,363,773,600,000 | J04ah1kBas0qR | [
"everyone"
] | [
"Luis Gonzalo Sánchez"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks again for the good comments. We have worked hard on improving the presentation of the results.
With regard to your cons:
i) We improve the presentation of the ideas by highlighting what the contributions are and why they are relevant. In Section 3, where there was no clear delineation between what is known and what is new, we put our effort into explaining the reason for including some well-known results, since they help in understanding the role of the infinitely divisible kernels in computing the proposed information theoretic quantities. We provide both a graphical and a textual explanation of the main ideas that can be extracted from Section 3.
ii) Section 1 was revisited and reorganized to make it easier to grasp the main ideas and contributions. We tried to put more emphasis on the results obtained for the application to metric learning. We also motivate the proposed quantities from the point of view of computing high-order descriptors of the data based on positive definite kernels.
iii) Section 4 was modified to convey the main result, which is the computation of the gradient of the proposed entropy, in a much simpler way. The technical details were moved to an appendix.
iv) We took care of the typos that were pointed out by the reviewers, as well as others we found while improving the paper. |
-AIqBI4_qZAQ1 | Information Theoretic Learning with Infinitely Divisible Kernels | [
"Luis Gonzalo Sánchez",
"Jose C. Principe"
] | In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids the plug in estimation of density and brings along the representation power of reproducing kernel Hilbert spaces. We show how analogues to quantities such as conditional entropy can be defined, enabling solutions to learning problems. In particular, we derive a supervised metric learning algorithm with very competitive results. | [
"information theoretic learning",
"functional",
"divisible kernels",
"framework",
"divisible matrices",
"positive definite matrices",
"renyi",
"entropy definition",
"key properties"
] | https://openreview.net/pdf?id=-AIqBI4_qZAQ1 | https://openreview.net/forum?id=-AIqBI4_qZAQ1 | 5pA7ERXu7H5uQ | review | 1,362,276,900,000 | -AIqBI4_qZAQ1 | [
"everyone"
] | [
"anonymous reviewer 5093"
] | ICLR.cc/2013/conference | 2013 | title: review of Information Theoretic Learning with Infinitely Divisible Kernels
review: This paper proposes a new type of information measure for positive semidefinte matrices, which is essentially the logarithm of the sum of powers of eigenvalues. Several entropy-like properties are shown based on properties of spectral functions. A notion of joint entropy is then defined through Hadamard products, which leads to conditional entropies.
The newly defined conditional entropy is finally applied to metric learning, leading naturally to a gradient descent procedure. Experiments show that the performance of the new procedure exceeds the state of the art (e.g., LMNN).
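Concretely, my reading of the construction is the following (paraphrased; see the paper for the exact normalizations): $S_\alpha(A) = \frac{1}{1-\alpha}\log_2\big[\sum_i \lambda_i(A)^\alpha\big]$ for a positive semidefinite matrix $A$ with $\mathrm{tr}(A) = 1$; a joint quantity defined through the renormalized Hadamard product, $S_\alpha(A, B) = S_\alpha\big((A \circ B)/\mathrm{tr}(A \circ B)\big)$; and a conditional analogue $S_\alpha(A|B) = S_\alpha(A, B) - S_\alpha(B)$, which is the quantity optimized by gradient descent with respect to the Mahalanobis parameters that enter the Gram matrix in the metric learning application.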
I did not understand the part on infinitely divisible matrices and why Theorem 3.1 leads to a link with maximum entropy.
To the best of my knowledge, the ideas proposed in the paper are novel. I like the approach of trying to define measures that have similar properties to entropies without the computational burden of computing densities. However, I would have liked more discussion of the effect of alpha (e.g., why alpha = 1.01 in experiments? does it make a big difference to change alpha? what does it correspond to for alpha = 2, in particular in relation to HSIC?).
Pros:
-New information measure with attractive properties
-Simple algorithm for metric learning
Cons:
-Lack of comparison with NCA, which is another non-convex approach (J. Goldberger, S. Roweis, G. Hinton, R. Salakhutdinov (2005). Neighbourhood Component Analysis. Advances in Neural Information Processing Systems 17, 513-520).
-Too little discussion on the choice of alpha |
-AIqBI4_qZAQ1 | Information Theoretic Learning with Infinitely Divisible Kernels | [
"Luis Gonzalo Sánchez",
"Jose C. Principe"
] | In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility. The proposed formulation avoids the plug in estimation of density and brings along the representation power of reproducing kernel Hilbert spaces. We show how analogues to quantities such as conditional entropy can be defined, enabling solutions to learning problems. In particular, we derive a supervised metric learning algorithm with very competitive results. | [
"information theoretic learning",
"functional",
"divisible kernels",
"framework",
"divisible matrices",
"positive definite matrices",
"renyi",
"entropy definition",
"key properties"
] | https://openreview.net/pdf?id=-AIqBI4_qZAQ1 | https://openreview.net/forum?id=-AIqBI4_qZAQ1 | hhNRhrYspih_x | comment | 1,363,772,280,000 | cUCwU-yxtoURe | [
"everyone"
] | [
"Luis Gonzalo Sánchez"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks for the comments. We really appreciate the time you put into reviewing our paper. I agree that in the original presentation many of the main points and contributions of the paper where hard to grasp. In the new version, we have made our contributions explicit. and some of the technical exposition was modified to avoid getting lost in t details. We emphasized on the equations to and provide better motivations for the mathematical developments of each section. We agree that some of the details could be safely moved to an appendix, without compromising the relevant results. |
KKZ-FeUj-9kjY | Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint | [
"Xanadu Halkias",
"Sébastien PARIS",
"Herve Glotin"
] | Deep Belief Networks (DBN) have been successfully applied on popular machine learning tasks. Specifically, when applied on hand-written digit recognition, DBNs have achieved approximate accuracy rates of 98.8%. In an effort to optimize the data representation achieved by the DBN and maximize their descriptive power, recent advances have focused on inducing sparse constraints at each layer of the DBN. In this paper we present a theoretical approach for sparse constraints in the DBN using the mixed norm for both non-overlapping and overlapping groups. We explore how these constraints affect the classification accuracy for digit recognition in three different datasets (MNIST, USPS, RIMES) and provide initial estimations of their usefulness by altering different parameters such as the group size and overlap percentage. | [
"dbn",
"deep belief networks",
"digit recognition",
"sparse constraints",
"sparse penalty",
"dbns",
"approximate accuracy rates"
] | https://openreview.net/pdf?id=KKZ-FeUj-9kjY | https://openreview.net/forum?id=KKZ-FeUj-9kjY | ttT0L-IGxpbuw | review | 1,362,153,000,000 | KKZ-FeUj-9kjY | [
"everyone"
] | [
"anonymous reviewer 0136"
] | ICLR.cc/2013/conference | 2013 | title: review of Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint
review: The paper proposes a mixed norm penalty for regularizing RBMs and DBNs. The work extends previous work on sparse RBMs and DBNs and extends the work of Luo et al. (2011) on sparse group RBMs (and DBMs) to deep belief nets. The method is tested on several datasets and no significant improvement is reported compared to the original DBN.
The paper has limited novelty, as the proposed mixed norm has already been investigated in detail by Luo et al. (2011) on an RBM and a DBM. Also, the original contribution is not properly referenced, as it appears only in the references section but not in the text.
In the caption of Figure 1, it is said that hidden units will overrepresent vs. underrepresent the data. It is unclear what exactly is meant. Can this be quantified? Is this overrepresentation/underrepresentation problem intrinsic to the investigated mixed norms, or is it more a question of choosing the right hyperparameters? The authors use a fixed regularization parameter for all investigated variants of the DBN. Could that be the reason for under/overrepresentation?
The authors choose three datasets that are all isolated handwritten digit recognition datasets. There are other problems, such as handwritten characters (e.g. Chinese) or Caltech 101 silhouettes, that also have a binary representation and would be worth considering in order to assess the generality of the proposed method. Also, if the authors are targeting the handwriting recognition application, more realistic and challenging scenarios could be considered (e.g. non-isolated characters).
Minor comments:
- The last version of the paper (v2) is not properly compiled and the citations are missing.
- The filters shown in Figure 1 should be made bigger.
- In Figure 2 and 4, x and y labels should be made bigger.
- Figure 2 is discussed in the caption of Figure 1. |
KKZ-FeUj-9kjY | Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint | [
"Xanadu Halkias",
"Sébastien PARIS",
"Herve Glotin"
] | Deep Belief Networks (DBN) have been successfully applied on popular machine learning tasks. Specifically, when applied on hand-written digit recognition, DBNs have achieved approximate accuracy rates of 98.8%. In an effort to optimize the data representation achieved by the DBN and maximize their descriptive power, recent advances have focused on inducing sparse constraints at each layer of the DBN. In this paper we present a theoretical approach for sparse constraints in the DBN using the mixed norm for both non-overlapping and overlapping groups. We explore how these constraints affect the classification accuracy for digit recognition in three different datasets (MNIST, USPS, RIMES) and provide initial estimations of their usefulness by altering different parameters such as the group size and overlap percentage. | [
"dbn",
"deep belief networks",
"digit recognition",
"sparse constraints",
"sparse penalty",
"dbns",
"approximate accuracy rates"
] | https://openreview.net/pdf?id=KKZ-FeUj-9kjY | https://openreview.net/forum?id=KKZ-FeUj-9kjY | ijgMjq-uMOiYw | review | 1,362,144,480,000 | KKZ-FeUj-9kjY | [
"everyone"
] | [
"anonymous reviewer 61fc"
] | ICLR.cc/2013/conference | 2013 | title: review of Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint
review: In this paper the authors propose a method to make the hidden units of an RBM group-sparse. The key idea is to add a penalty term to the negative log-likelihood loss penalizing the L2/L1 norm over the activations of the RBM. The authors demonstrate their method on three digit classification tasks. These experiments show similar accuracy to the baseline model but faster convergence.
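In the usual mixed-norm notation, the training objective is (up to the details of the paper's Equations 16-17) something like $\min_W -\sum_l \log p(x^{(l)}) + \lambda \sum_l \sum_{g \in G} \big(\sum_{j \in g} P(h_j = 1 | x^{(l)})^2\big)^{1/2}$, i.e. an L1 norm over groups of L2 norms of the hidden-unit activation probabilities, where $G$ is either a partition of the hidden units (non-overlapping groups) or a cover with shared units (overlapping groups).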
There is a vast literature on sparse coding and group sparse coding and several references are missing.
Among the works that use group sparse coding but not RBMs there are:
A. Hyvarinen and U. Koster. Complex cell pooling and the statistics of natural images. Network, 18(2):81–100, 2007
K. Kavukcuoglu, M. Ranzato, R. Fergus, Y. LeCun. 'Learning Invariant Features through Topographic Filter Maps'. Proc. of Computer Vision and Pattern Recognition Conference (CVPR 2009), Miami, 2009
while these are some works related to RBM where sparse features are grouped in a similar way to group sparse coding methods:
S. Osindero, M. Welling, and G. E. Hinton. Topographic product
models applied to natural scene statistics. Neural Comp., 18:
344–381, 2006.
M. Ranzato, A. Krizhevsky, G. Hinton, 'Factored 3-Way Restricted Boltzmann Machines for Modeling Natural Images'. Proc. of the 13-th International Conference on Artificial Intelligence and Statistics (AISTATS 2010), Italy, 2010
Overall, the novelty of the proposed method is limited. It would be sufficient if the method were well motivated and described (see more detailed comments below). The quality of the work is fair, since the empirical validation is also pretty weak: comparisons are reported on three small datasets which are very similar to each other, accuracy is on par with baseline methods, and only convergence time is better, but this finding has not been analyzed enough to draw solid conclusions.
PROS
+ simple method
CONS
- limited novelty
- the method is not well motivated (see below)
- missing references
- unconvincing empirical validation
- writing needs improvements (see below)
Detailed comments
-- The major concern is about the proposed method in general.
On the one hand, it makes total sense to add a sparsity constraint to the negative log-likelihood loss. On the other hand, RBMs are probabilistic models, and one wonders what this additional term means. If it is interpreted as a prior on the conditional distribution over the hidden units, how is that changing the marginal likelihood, for instance? This leads to the discussion of an alternative approach, which is to wrap the group sparsity constraint into the probabilistic model itself and to maximize the likelihood of this. The above references on topographic PoT and cRBM can indeed be interpreted as extensions of RBMs to make hidden units sparse in groups.
A potential problem with the current formulation is that inference of the features does not take into account any sparsity (which is achieved only through learning). Overall after fine-tuning, one may expect little if any sparsity in the hidden units (which may explain why results are so similar to the baseline).
In light of this, it would have been nice if the authors commented on this way to tackle the problem, advantages and disadvantages of each approach.
More generally, I found the motivation of this paper very weak. The reason why sparsity and group sparsity are enforced is pretty vague and unconvincing.
-- The empirical validation is very weak. The three datasets are very homogeneous and results are not better than the baseline.
Why is the DBN so much slower? This is the strongest result of the paper in my opinion, but it is not clear why that happens.
-- There are lots of imprecise statements. Here a few.
First, the title should be changed from 'DBN' to 'RBM'.
Abstract
The '98.8' figure in the abstract may refer to a specific dataset (MNIST?) but does not hold in general.
'optimize the data representation achieved by the DBN …' is vague.
'theoretical approach': I would not call this approach theoretical!
Sec. 1
'due to their generative and unsupervised learning framework': needs to be rephrased.
[2, 3]: these references are not appropriate, perhaps [12, 13]? |
KKZ-FeUj-9kjY | Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint | [
"Xanadu Halkias",
"Sébastien PARIS",
"Herve Glotin"
] | Deep Belief Networks (DBN) have been successfully applied on popular machine learning tasks. Specifically, when applied on hand-written digit recognition, DBNs have achieved approximate accuracy rates of 98.8%. In an effort to optimize the data representation achieved by the DBN and maximize their descriptive power, recent advances have focused on inducing sparse constraints at each layer of the DBN. In this paper we present a theoretical approach for sparse constraints in the DBN using the mixed norm for both non-overlapping and overlapping groups. We explore how these constraints affect the classification accuracy for digit recognition in three different datasets (MNIST, USPS, RIMES) and provide initial estimations of their usefulness by altering different parameters such as the group size and overlap percentage. | [
"dbn",
"deep belief networks",
"digit recognition",
"sparse constraints",
"sparse penalty",
"dbns",
"approximate accuracy rates"
] | https://openreview.net/pdf?id=KKZ-FeUj-9kjY | https://openreview.net/forum?id=KKZ-FeUj-9kjY | dWSK4E1RkeWRi | review | 1,362,193,620,000 | KKZ-FeUj-9kjY | [
"everyone"
] | [
"anonymous reviewer e6d4"
] | ICLR.cc/2013/conference | 2013 | title: review of Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint
review: Since the last version of the paper (v2) is incomplete my following comments are mainly based on the first version.
This paper proposes using $l_{1,2}$ regularization (for both non-overlapping and overlapping groups) on the activation probabilities of hidden units in RBMs. Then DBNs pretrained with the resulting mixed-norm RBMs are applied to the task of digit recognition.
My main concern is the mistakes in Equation 16 (and 17), the core of this paper. The sign of the $\lambda$ term should be minus. There is also a missing $P(h_j=1|x^l)$ factor in that term. Since these mistakes could explain very well why the results are worse than the baseline and why bigger non-overlapping groups (which can make the regularization term smaller) are preferred, I do not think they are merely typos. |
l_PClqDdLb5Bp | Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | [
"Matthew Zeiler",
"Rob Fergus"
] | We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation. | [
"regularization",
"data augmentation",
"stochastic pooling",
"simple",
"effective",
"conventional deterministic",
"operations"
] | https://openreview.net/pdf?id=l_PClqDdLb5Bp | https://openreview.net/forum?id=l_PClqDdLb5Bp | obPcCcSvhKovH | review | 1,362,369,360,000 | l_PClqDdLb5Bp | [
"everyone"
] | [
"Marc'Aurelio Ranzato"
] | ICLR.cc/2013/conference | 2013 | review: Another minor comment related to the visualization method: since there is no iterative 'inference' step typical of deconv. nets (the features are already given by a direct forward pass) then this method is perhaps more similar to this old paper of mine:
M. Ranzato, F.J. Huang, Y. Boureau, Y. LeCun, 'Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition'. Proc. of Computer Vision and Pattern Recognition Conference (CVPR 2007), Minneapolis, 2007.
http://www.cs.toronto.edu/~ranzato/publications/ranzato-cvpr07.pdf
The only differences are the new pooling instead of max-pooling, the use of ReLU instead of tanh, and the tying of the weights (filters optimized for feature extraction but also used for reconstruction).
Overall, I think that even this visualization method constitutes a nice contribution of this paper. |
l_PClqDdLb5Bp | Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | [
"Matthew Zeiler",
"Rob Fergus"
] | We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation. | [
"regularization",
"data augmentation",
"stochastic pooling",
"simple",
"effective",
"conventional deterministic",
"operations"
] | https://openreview.net/pdf?id=l_PClqDdLb5Bp | https://openreview.net/forum?id=l_PClqDdLb5Bp | BBmMrdZA5UBaz | review | 1,362,349,140,000 | l_PClqDdLb5Bp | [
"everyone"
] | [
"Marc'Aurelio Ranzato"
] | ICLR.cc/2013/conference | 2013 | review: I really like this paper because:
- it is simple yet very effective and
- the empirical validation not only demonstrates the method but it also helps understanding where the gain comes from (tab. 5 was very useful to understand the regularization effect brought by the sampling noise).
I also found intriguing the visualization method: using deconv. nets to reverse a trained conv. net; that's clever! Maybe that can become a killer app for deconv nets. Videos are also very nice.
However, I was wondering: how did you invert the normalization layer? |
l_PClqDdLb5Bp | Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | [
"Matthew Zeiler",
"Rob Fergus"
] | We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation. | [
"regularization",
"data augmentation",
"stochastic pooling",
"simple",
"effective",
"conventional deterministic",
"operations"
] | https://openreview.net/pdf?id=l_PClqDdLb5Bp | https://openreview.net/forum?id=l_PClqDdLb5Bp | SPk0N0RlUTrqv | review | 1,394,470,920,000 | l_PClqDdLb5Bp | [
"everyone"
] | [
"anonymous reviewer f4a8"
] | ICLR.cc/2013/conference | 2013 | review: I apologize for the delay in my reply.
Verdict: weak accept. |
l_PClqDdLb5Bp | Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | [
"Matthew Zeiler",
"Rob Fergus"
] | We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation. | [
"regularization",
"data augmentation",
"stochastic pooling",
"simple",
"effective",
"conventional deterministic",
"operations"
] | https://openreview.net/pdf?id=l_PClqDdLb5Bp | https://openreview.net/forum?id=l_PClqDdLb5Bp | WilRXfhv6jXxa | review | 1,361,845,800,000 | l_PClqDdLb5Bp | [
"everyone"
] | [
"anonymous reviewer 2b4c"
] | ICLR.cc/2013/conference | 2013 | title: review of Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks
review: This paper introduces a new regularization technique based on inexpensive approximations to model averaging, similar to dropout. As with dropout, the training procedure involves stochasticity but the trained model uses a cheap approximation to the average over all possible models to make a prediction.
The paper includes empirical evidence that the model averaging effect is happening, and uses the method to improve on the state of the art for three datasets.
The method is simple and in principle, computationally inexpensive.
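For concreteness, here is a minimal NumPy sketch of the pooling rule as I understand it (illustrative only, not the authors' GPU implementation):

    import numpy as np

    def stochastic_pool(region, train=True, rng=np.random):
        # 'region' holds the non-negative activations (e.g. ReLU outputs)
        # of a single pooling region.
        a = np.ravel(region).astype(float)
        s = a.sum()
        if s == 0.0:                # all-zero region: nothing to sample from
            return 0.0
        p = a / s                   # multinomial given by the activities
        if train:
            return a[rng.choice(len(a), p=p)]   # sample one activation
        return float(np.dot(p, a))              # probabilistic weighting at test time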
Two criticisms of this paper:
-The result on CIFAR-10 was not in fact state of the art at the time of submission; it was just slightly worse than Snoek et al's result using Bayesian hyperparameter optimization.
-I think it's worth mentioning that while this method is computationally inexpensive in principle, it is not necessarily easy to get a fast implementation in practice; i.e., people wishing to use this method must implement their own GPU kernel to do stochastic pooling, rather than using off-the-shelf implementations of convolution and basic tensor operations like indexing.
Otherwise, I think this is an excellent paper. My colleagues and I have made a slow implementation of the method and used it to reproduce the authors' MNIST results. The method works as advertised and is easy to use. |
l_PClqDdLb5Bp | Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | [
"Matthew Zeiler",
"Rob Fergus"
] | We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation. | [
"regularization",
"data augmentation",
"stochastic pooling",
"simple",
"effective",
"conventional deterministic",
"operations"
] | https://openreview.net/pdf?id=l_PClqDdLb5Bp | https://openreview.net/forum?id=l_PClqDdLb5Bp | OOBjrzG_LdOEf | review | 1,362,085,800,000 | l_PClqDdLb5Bp | [
"everyone"
] | [
"Ian Goodfellow"
] | ICLR.cc/2013/conference | 2013 | review: I'm excited about this paper because it introduces another trick for cheap model averaging like dropout. It will be interesting to see if this kind of fast model averaging turns into a whole subfield.
I recently got some very good results ( http://arxiv.org/abs/1302.4389 ) by using a model that works well with the kinds of approximations to model averaging that dropout makes. Presumably there are models that get the same kind of synergy with stochastic pooling. I think this is a very promising prospect, since stochastic pooling works so well even with just vanilla rectifier networks as the base model. |
l_PClqDdLb5Bp | Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | [
"Matthew Zeiler",
"Rob Fergus"
] | We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation. | [
"regularization",
"data augmentation",
"stochastic pooling",
"simple",
"effective",
"conventional deterministic",
"operations"
] | https://openreview.net/pdf?id=l_PClqDdLb5Bp | https://openreview.net/forum?id=l_PClqDdLb5Bp | w0XswsNFad7Qu | review | 1,394,470,920,000 | l_PClqDdLb5Bp | [
"everyone"
] | [
"anonymous reviewer f4a8"
] | ICLR.cc/2013/conference | 2013 | review: I apologize for the delay in my reply.
Verdict: weak accept. |
l_PClqDdLb5Bp | Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | [
"Matthew Zeiler",
"Rob Fergus"
] | We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation. | [
"regularization",
"data augmentation",
"stochastic pooling",
"simple",
"effective",
"conventional deterministic",
"operations"
] | https://openreview.net/pdf?id=l_PClqDdLb5Bp | https://openreview.net/forum?id=l_PClqDdLb5Bp | 1toZvrIP-Xvme | review | 1,394,470,860,000 | l_PClqDdLb5Bp | [
"everyone"
] | [
"anonymous reviewer f4a8"
] | ICLR.cc/2013/conference | 2013 | review: I apologize for the delay in my reply.
Verdict: weak accept. |
l_PClqDdLb5Bp | Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | [
"Matthew Zeiler",
"Rob Fergus"
] | We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation. | [
"regularization",
"data augmentation",
"stochastic pooling",
"simple",
"effective",
"conventional deterministic",
"operations"
] | https://openreview.net/pdf?id=l_PClqDdLb5Bp | https://openreview.net/forum?id=l_PClqDdLb5Bp | ZVb9LYU20iZhX | review | 1,362,379,980,000 | l_PClqDdLb5Bp | [
"everyone"
] | [
"anonymous reviewer f4a8"
] | ICLR.cc/2013/conference | 2013 | title: review of Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks
review: Regularization methods are critical for the successful applications of
neural networks. This work introduces a new dropout-inspired
regularization method named stochastic pooling. The method is simple,
applicable to convolutional neural networks with positive
nonlinearities, and achieves good performance on several tasks.
A potentially severe issue is that the results are no longer state of
the art, as maxout networks get better results. But this does not
strongly suggest that stochastic pooling is inferior to maxout, since
the methods are different and can therefore be combined, and, more
importantly, maxout networks may have used a more thorough
architecture and hyperparameter search, which would explain their
better performance.
The main problem with the paper is that the experiments are lacking in
that there is no proper comparison to dropout. While the results on
CIFAR-10 are compared to the original dropout paper and result in an
improvement, the paper does not report results for the remainder of
the datasets with dropout and with the same architecture (if the
architecture is not the same in all experiments, then performance
differences could be caused by architecture differences). It is thus
possible that dropout would achieve nearly identical performance on
these tasks if given the same architecture on MNIST, CIFAR-100, and
SVHN. What's more, when properly tweaked, dropout outperforms the
results reported here on CIFAR-10, as shown in Snoek et al. [A]
(sub-15% test error); and it is conceivable that Bayesian-optimized
stochastic pooling would achieve worse results.
In addition to dropout, it is also interesting to compare to dropout
that occurs before max-pooling. This kind of dropout bears more
resemblance to stochastic pooling, and may achieve results that are
similar (or better -- it cannot be ruled out).
Finally, a minor point. The paper emphasizes the fact that stochastic
pooling averages 4^N models while dropout averages 2^N models, where N
is the number of units. While true, this is not relevant, since both
quantities are vast, and the performance differences between the two
methods will stem from other sources.
To conclude, the paper presented an interesting and elegant technique
for preventing overfitting that may become widely used. However, this
paper does not convincingly demonstrate its superiority over dropout.
References
----------
[A] Snoek, J. and Larochelle, H. and Adams, R.P., Practical Bayesian
Optimization of Machine Learning Algorithms, NIPS 2012 |
l_PClqDdLb5Bp | Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | [
"Matthew Zeiler",
"Rob Fergus"
] | We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation. | [
"regularization",
"data augmentation",
"stochastic pooling",
"simple",
"effective",
"conventional deterministic",
"operations"
] | https://openreview.net/pdf?id=l_PClqDdLb5Bp | https://openreview.net/forum?id=l_PClqDdLb5Bp | lWJdCuzGuRlGF | review | 1,362,101,820,000 | l_PClqDdLb5Bp | [
"everyone"
] | [
"anonymous reviewer cd07"
] | ICLR.cc/2013/conference | 2013 | title: review of Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks
review: The authors introduce a stochastic pooling method in the context of
convolutional neural networks, which replaces the traditionally used
average or max pooling operators. In stochastic pooling, a
multinomial distribution is created from the input activations and used to
select the index of the activation to pass to the next layer of the
network. On first read, this method resembled the 'probabilistic max
pooling' of Lee et al. in 'Convolutional Deep Belief Networks for
Scalable Unsupervised Learning of Hierarchical Representations';
however, the context and execution are different.
During testing, the authors employ a separate pooling function that is a
weighted sum of the input activations and their corresponding
probabilities that would be used for index selection during training.
This pooling operator is speculated to work as a form of
regularization through model averaging. The authors substantiate this claim with results obtained by averaging multiple samples at each pooling region of the stochastic architectures, and with visualizations of images obtained from
reconstructions using deconvolutional networks.
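As a concrete illustration of the pooling rule described above, the following is a minimal NumPy sketch (my own illustration, not the authors' code): at training time one activation in each pooling region is sampled from the multinomial distribution given by the non-negative activities, and at test time the probability-weighted sum is used instead. The function and variable names are purely illustrative.

```python
import numpy as np

def stochastic_pool(acts, rng, train=True):
    """Pool one region of non-negative activations.

    train=True:  sample one activation with probability p_i = a_i / sum(a).
    train=False: return the probability-weighted sum used at test time.
    """
    acts = np.asarray(acts, dtype=float)
    total = acts.sum()
    if total == 0.0:                                # all-zero region
        return 0.0
    probs = acts / total
    if train:
        idx = rng.choice(len(acts), p=probs)        # stochastic selection
        return acts[idx]
    return float(np.dot(probs, acts))               # weighted averaging

rng = np.random.default_rng(0)
region = [0.0, 1.5, 0.5, 2.0]
print(stochastic_pool(region, rng, train=True))     # one sampled activation
print(stochastic_pool(region, rng, train=False))    # 1.625
```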
Moreover, test set accuracies for this method are given for four
relevant datasets where it appears stochastic pooling CNNs are able to
achieve the best known performance on three. A good amount of detail
has been provided allowing the reader to reproduce the results.
As the sampling scheme proposed may be combined with other regularization techniques, it will be exciting to see how multiple forms of regularization can contribute or degrade test accuracies.
Some minor comments follow:
- Mini-batch size for training is not mentioned.
- Fig. 2 could be clearer on first read, e.g. if boxes were drawn
around (a,b,c), (e,f), and (g,h) to indicate they are operations on
the same dataset.
- In Section 4.2 it is noted that stochastic pooling avoids
over-fitting, unlike averaging and max pooling; however, in Fig. 3 it
certainly appears that the average and max techniques are not severely
over-fitting as in the typical network training case (with noticeable
degradation in test set performance), although the network does train
to near zero error on the training set. It may be more accurate to state
that stochastic pooling promotes better generalization, yet additional training epochs may make the over-fitting argument clearer.
- Fig. 3 also suggests that additional training may improve the final
reported test set error in the case of stochastic pooling. The
reference to state-of-the-art performance on CIFAR-10 is no longer
current.
- Section 4.8, sp 'proabilities' |
idpCdOWtqXd60 | Efficient Estimation of Word Representations in Vector Space | [
"Tomas Mikolov",
"Kai Chen",
"Greg Corrado",
"Jeffrey Dean"
] | We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day and one CPU to derive high quality 300-dimensional vectors for one million vocabulary from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community. | [
"word representations",
"vectors",
"efficient estimation",
"vector space",
"novel model architectures",
"continuous vector representations",
"words",
"large data sets",
"quality"
] | https://openreview.net/pdf?id=idpCdOWtqXd60 | https://openreview.net/forum?id=idpCdOWtqXd60 | ELp1azAY4uaYz | review | 1,362,415,140,000 | idpCdOWtqXd60 | [
"everyone"
] | [
"anonymous reviewer 3c5e"
] | ICLR.cc/2013/conference | 2013 | title: review of Efficient Estimation of Word Representations in Vector Space
review: This paper introduces a linear word vector learning model and shows that it performs better on a linear evaluation task than nonlinear models. While the new evaluation experiment is interesting, the paper has too many issues in its current form.
One problem that has already been pointed out by the other reviewers is the lack of comparison and proper acknowledgment of previous models. The log-linear models have already been introduced by Mnih et al. and the averaging of context vectors (though of a larger context) has already been introduced by Huang et al. Both are cited but the model similarity is not mentioned.
The other main problem is that the evaluation metric clearly favors linear models since it checks for linear relationships. While it is an interesting finding that this holds for any of the models, this phenomenon does not necessarily lead to better performance. Other non-linear models may have encoded all this information too, but not on a linear manifold. The whole new evaluation metric is just showing that linear models have more linear relationships. If this were combined with some performance increase on a real task, then non-linearity for word vectors would have been convincingly questioned.
Do these relationships hold for even simpler models like LSA or tf-idf vectors?
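For readers unfamiliar with the evaluation under discussion, the sketch below (my own illustration, not taken from the paper) shows the linear-offset analogy test on length-normalized vectors: the answer to "a is to b as c is to ?" is the vocabulary word whose vector is closest, by cosine similarity, to vec(b) - vec(a) + vec(c). The toy vectors and vocabulary are made up.

```python
import numpy as np

def analogy(word_vecs, a, b, c):
    """Answer 'a is to b as c is to ?' by cosine similarity over
    length-normalized vectors, excluding the three query words."""
    vecs = {w: v / np.linalg.norm(v) for w, v in word_vecs.items()}
    target = vecs[b] - vecs[a] + vecs[c]
    target /= np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for w, v in vecs.items():
        if w in (a, b, c):
            continue
        sim = float(np.dot(target, v))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# purely illustrative toy vectors
toy = {
    "man":    np.array([1.0, 0.0, 0.2]),
    "woman":  np.array([1.0, 1.0, 0.2]),
    "king":   np.array([1.0, 0.0, 1.0]),
    "queen":  np.array([1.0, 1.0, 1.0]),
    "banana": np.array([-1.0, 0.3, -0.5]),
}
print(analogy(toy, "man", "king", "woman"))   # -> 'queen' on this toy set
```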
Introduction:
Many very broad and general statements are made without any citations to back them up.
The motivation talks about how simpler bag of words models are not sufficient anymore to make significant progress... and then the rest of the paper introduces a simpler bag of words model and argues that it's better. The intro and the first paragraph of section 3 directly contradict themselves.
The other motivation that is mentioned is how useful for actual tasks word vectors can be. I agree but this is not shown. This paper would have been significantly stronger if the vectors from the proposed (not so new) model would have been compared on any of the standard evaluation metrics that have been used for these words. For instance: Turian et al used NER, Huang et al used human similarity judgments, the author himself used them for language modeling. Why not show improvements on any of these tasks?
LDA and LSA are missing citations.
Citation [14] which is in submission seems an important paper to back up some of the unsubstantiated claims of this paper but is not available.
The hidden layer in Collobert et al's word vectors is usually around 100, not between 500 to 1000 as the authors write.
Section 2.2 is impossible to follow for people not familiar with this line of work.
Section 4:
Why cosine distance? A comparison with Euclidean distance would be interesting, or should all word vectors be length-normalized?
The problem with synonyms in the evaluation seems somewhat important but is ignored.
The authors claim that their evaluation metric 'should be positively correlated with' 'certain applications'. That's yet another unsubstantiated claim that could be made much stronger by showing such a correlation on the above-mentioned tasks.
Mnih is misspelled in table 3.
The comparisons are lacking consistency. All the models are trained on different corpora and have different dimensionality. Looking at the top 3 previous models (Mikolov 2x and Huang) there seems to be a clear correlation between vector size and overall performance. If one wants to make a convincing argument that the presented models are better, it would be important to show that using the same corpus.
Given that the overall accuracy is around 50%, the examples in table 5 must have been manually selected? If not, it would be great to know how they were selected. |
idpCdOWtqXd60 | Efficient Estimation of Word Representations in Vector Space | [
"Tomas Mikolov",
"Kai Chen",
"Greg Corrado",
"Jeffrey Dean"
] | We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day and one CPU to derive high quality 300-dimensional vectors for one million vocabulary from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community. | [
"word representations",
"vectors",
"efficient estimation",
"vector space",
"novel model architectures",
"continuous vector representations",
"words",
"large data sets",
"quality"
] | https://openreview.net/pdf?id=idpCdOWtqXd60 | https://openreview.net/forum?id=idpCdOWtqXd60 | bf2Dnm5t9Ubqe | review | 1,360,865,940,000 | idpCdOWtqXd60 | [
"everyone"
] | [
"anonymous reviewer f5bf"
] | ICLR.cc/2013/conference | 2013 | title: review of Efficient Estimation of Word Representations in Vector Space
review: The paper studies the problem of learning vector representations for words based on large text corpora using 'neural language models' (NLMs). These models learn a feature vector for each word in such a way that the feature vector of the current word in a document can be predicted from the feature vectors of the words that precede (and/or succeed) that word. Whilst a number of studies have developed techniques to make the training of NLMs more efficient, scaling NLMs up to today's multi-billion-word text corpora is still a challenge.
The main contribution of the paper comprises two new NLM architectures that facilitate training on massive data sets. The first model, CBOW, is essentially a standard feed-forward NLM without the intermediate projection layer (but with weight sharing + averaging before applying the non-linearity in the hidden layer). The second model, skip-gram, comprises a collection of simple feed-forward nets that predict the presence of a preceding or succeeding word from the current word. The models are trained on a massive Google News corpus, and tested on a semantic and syntactic question-answering task. The results of these experiments look promising.
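A minimal sketch of the two architectures as summarized above may help: CBOW averages the context-word vectors (with no hidden nonlinearity) and scores the center word against that average, while skip-gram scores each surrounding word against the center word's vector. A full softmax is used here for clarity, whereas the paper relies on a hierarchical softmax; all sizes and weights below are toy placeholders, not the authors' parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 10, 4                                   # toy vocabulary size, dimension
W_in = rng.normal(scale=0.1, size=(V, D))      # input (projection) vectors
W_out = rng.normal(scale=0.1, size=(V, D))     # output vectors

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cbow_probs(context_ids):
    """CBOW: average the context vectors, then score every word as the
    predicted center word (no hidden-layer nonlinearity)."""
    h = W_in[context_ids].mean(axis=0)
    return softmax(W_out @ h)

def skipgram_probs(center_id):
    """Skip-gram: use the center word's vector to score every word as a
    possible surrounding word."""
    return softmax(W_out @ W_in[center_id])

print(cbow_probs([1, 2, 4, 5]).round(3))       # distribution over the center word
print(skipgram_probs(3).round(3))              # distribution over one context slot
```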
Whilst I think this line of research is interesting and the presented results look promising, I do have three main concerns with this paper:
(1) The choice for the proposed models (CBOW and skip-gram) are not clearly motivated. The authors' only motivation appears to be for computational reasons. However, the experiments do not convincingly show that this indeed leads to performance improvements on the task at hand. In particular, the 'vanilla' NLM implementation of the authors actually gives the best performance on the syntactic question-answering task. Faster training speed is mainly useful when you can throw more data at the model, and the model can effectively learn from this new data (as the authors argue themselves in the introduction). The experiments do not convincingly show that this happens. In addition, the comparisons with the models by Collobert-Weston, Turian, Mnih, Mikolov, and Huang appear to be unfair: these models were trained on much smaller corpora. A fair experiment would re-train the models on the same data to show that they learn slower (which is the authors' hypothesis), e.g., by showing learning curves or by showing a graph that shows performance as a function of training time.
(2) The description of the models that are developed is very minimal, making it hard to determine how different they are from, e.g., the models presented in [15]. It would be very helpful if the authors included some graphical representations and/or more mathematical details of their models. Given that the authors still almost have one page left, and that they use a lot of space for the (frankly, somewhat superfluous) equations for the number of parameters of each model, this should not be a problem.
(3) Throughout the paper, the authors assume that the computational complexity of learning is proportional to the number of parameters in the model. However, their experimental results show that this assumption is incorrect: doubling the number of parameters in the CBOW and skip-gram models only leads to a very modest increase in training time (see Table 4).
Detailed comments
===============
- The paper contains numerous typos and small errors. Specifically, I noticed a lot of missing articles throughout the paper.
- 'For many tasks, the amount of … focus on more advanced techniques.' -> This appears to be a contradiction. If speech recognition performance is largely governed by the amount of data we have, then simply scaling up the basic techniques should help a lot!
- 'solutions were proposed for avoiding it' -> For avoiding what? Computation of the full output distribution over words of length V?
- 'multiple degrees of similarities' -> What is meant by degrees here? Different dimensions of similarity? (For instance, Fiat is like Ferrari because they're both Italian but unlike Ferrari because it's not a sports car.) Or different strengths of the similarity? (For instance, Denmark is more like Germany than like Spain.) What about the fact that semantic similarities are intransitive? (Tversky's famous example of the similarity between China and North Korea.)
- 'Moreover, we discuss hyper-parameter selection … millions of words in the vocabulary.' -> I fail to see the relation between hyperparameter selection and training speed. Moreover, the paper actually does not say anything about hyperparameter selection! It only states the initial learning rate is 0.025, and that is linearly decreased (but not how fast).
- Table 2: It appears that the performance of the CBOW model is still improving. How does it perform when D = 1000 or 2000? Why not make a learning curve here (plot performance as a function of D or of training time)?
- Table 3: Why is 'our NNLM' so much better than the other NNLMs? Just because it was trained on more data? What model is implemented by 'our NNLM' anyway?
- Tables 3 and 4: Why is the NNLM trained on 6 billion examples and the others on just 0.7 or 1.6 billion examples? The others should be faster, so easier to train on more data, right?
- It would be interesting if the authors could say something about how these models deal with intransitive semantic similarities, e.g., with the similarities between 'river', 'bank', and 'bailout'. People like Tversky have advocated against the use of semantic-space models like NLMs because they cannot appropriately model intransitive similarities.
- Instead of looking at binary question-answering performance, it may also be interesting to look at whether a hit list of answers contains the correct answer.
- The number of self-citations seems somewhat excessive.
- I tried to find reference [14] to see how it differs from the present paper, but I was not able to find it anywhere. |
idpCdOWtqXd60 | Efficient Estimation of Word Representations in Vector Space | [
"Tomas Mikolov",
"Kai Chen",
"Greg Corrado",
"Jeffrey Dean"
] | We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day and one CPU to derive high quality 300-dimensional vectors for one million vocabulary from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community. | [
"word representations",
"vectors",
"efficient estimation",
"vector space",
"novel model architectures",
"continuous vector representations",
"words",
"large data sets",
"quality"
] | https://openreview.net/pdf?id=idpCdOWtqXd60 | https://openreview.net/forum?id=idpCdOWtqXd60 | 6NMO6i-9pXN8q | review | 1,363,602,720,000 | idpCdOWtqXd60 | [
"everyone"
] | [
"anonymous reviewer 3c5e"
] | ICLR.cc/2013/conference | 2013 | review: It is really unfortunate that the responding author seems to care
solely about every possible tweak to his model and combinations of his
models but shows a strong disregard for a proper scientific comparison
that would show what's really the underlying reason for the increase
in accuracy on (again) his own new task. For all we know, some of the
word vectors and models that are being compared to in table 4 may have
been trained on datasets that didn't even include the terms used in
the evaluation, or they may have been very rare in that corpus.
The models compared in table 4 still all have different word vector
sizes and are trained on different datasets, despite the clear
importance of word vector size and dataset size. Maybe the
hierarchical softmax on any of the existing models trained on the same
dataset would yield the same performance? There's no way of knowing if
this paper introduced a new model that works better or just a new
training dataset (which won't be published) or just a well selected
combination of existing methods.
The authors write that there are many obvious real tasks that their
word vectors should help but don't show or mention any. NER has been
used to compare word vectors and there are standard datasets out there
for a comparison on which many people train and test. There are human
similarity judgments tasks and datasets that several word vectors have
been compared on. Again, the author seems to prefer to ignore all but
his own models, dataset and tasks. It is still not clear to me what
part of the model gives the performance increase. Is it the top-layer
task or is it the averaging of word vectors? Again, averaging word
vectors has already been done as part of the model of Huang et al. A
link to a Wikipedia article by the author is not as strong an
argument as showing equations that point to the actual difference.
After a discussion among the reviewers, we unanimously feel that the revised version of the paper and the accompanying rebuttal do not resolve many of the issues raised by the reviewers, and many of the reviewers' questions (e.g., on which models include nonlinearities) remain unanswered.
For instance, they say that the projection layer in an NNLM has no
nonlinearity, but that was not the point: the next layer has one, and
from the fuzzy definitions it seems that the proposed model does not.
Does that mean we could just get rid of the non-linearity of the
vector averaging part of Huang's model and get the same performance?
LDA might be in fashion now, but papers in high-quality conferences are
supposed to be understandable in the future as well, when some models may
not be so widely known anymore.
The figure is much less clear in describing the model than the
equations all three reviewers asked for.
Again, there is one interesting bit in here which is the new
evaluation metric (which may or may not be introduced in reference
[14] soon) and the fact that any of these models capture these
relationships linearly. Unfortunately, the entire comparison to
previous work (table 4 and the writing) is unscientific and sloppy.
Furthermore, the possibly new models are not clearly enough defined by
their equations.
It is generally unclear where the improvements are coming from.
We hope the authors will clean up the writing and include proper
comparisons for a future submission. |
idpCdOWtqXd60 | Efficient Estimation of Word Representations in Vector Space | [
"Tomas Mikolov",
"Kai Chen",
"Greg Corrado",
"Jeffrey Dean"
] | We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day and one CPU to derive high quality 300-dimensional vectors for one million vocabulary from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community. | [
"word representations",
"vectors",
"efficient estimation",
"vector space",
"novel model architectures",
"continuous vector representations",
"words",
"large data sets",
"quality"
] | https://openreview.net/pdf?id=idpCdOWtqXd60 | https://openreview.net/forum?id=idpCdOWtqXd60 | OOksUbLar_UGE | review | 1,363,350,360,000 | idpCdOWtqXd60 | [
"everyone"
] | [
"anonymous reviewer 13e8"
] | ICLR.cc/2013/conference | 2013 | review: In light of the authors' response I'm changing my score for the paper to Weak Reject. |
idpCdOWtqXd60 | Efficient Estimation of Word Representations in Vector Space | [
"Tomas Mikolov",
"Kai Chen",
"Greg Corrado",
"Jeffrey Dean"
] | We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day and one CPU to derive high quality 300-dimensional vectors for one million vocabulary from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community. | [
"word representations",
"vectors",
"efficient estimation",
"vector space",
"novel model architectures",
"continuous vector representations",
"words",
"large data sets",
"quality"
] | https://openreview.net/pdf?id=idpCdOWtqXd60 | https://openreview.net/forum?id=idpCdOWtqXd60 | 3Ms_MCOhFG34r | review | 1,368,188,160,000 | idpCdOWtqXd60 | [
"everyone"
] | [
"Pontus Stenetorp"
] | ICLR.cc/2013/conference | 2013 | review: In response to the request for references made by the first author for the statement regarding semantic similarity being intransitive, I think the reference should be to 'Features of similarity' by Tversky (1977). Please find what I believe to be the relevant portion below.
`We say 'the portrait resembles the person' rather than 'the person resembles the portrait.' We say 'the son resembles the father' rather than 'the father resembles the son.' We say 'an ellipse is like a circle,' not 'a circle is like an ellipse,' and we say 'North Korea is like Red China' rather than 'Red China is like North Korea.''
Lastly, a question that was raised by the reviewers was whether these relationships also hold for LSA or tf-idf vectors to which the first author responded that this has already been discussed in another paper and it turned out not to be the case. I would be very thankful for a reference to this work since I am not familiar with it. |
idpCdOWtqXd60 | Efficient Estimation of Word Representations in Vector Space | [
"Tomas Mikolov",
"Kai Chen",
"Greg Corrado",
"Jeffrey Dean"
] | We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day and one CPU to derive high quality 300-dimensional vectors for one million vocabulary from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community. | [
"word representations",
"vectors",
"efficient estimation",
"vector space",
"novel model architectures",
"continuous vector representations",
"words",
"large data sets",
"quality"
] | https://openreview.net/pdf?id=idpCdOWtqXd60 | https://openreview.net/forum?id=idpCdOWtqXd60 | ddu0ScgIDPSxi | review | 1,363,279,380,000 | idpCdOWtqXd60 | [
"everyone"
] | [
"anonymous reviewer f5bf"
] | ICLR.cc/2013/conference | 2013 | review: The revision and rebuttal failed to address the issues raised by the reviewers. I do not think the paper should be accepted in its current form.
Quality rating: Strong reject
Confidence: Reviewer is knowledgeable |
idpCdOWtqXd60 | Efficient Estimation of Word Representations in Vector Space | [
"Tomas Mikolov",
"Kai Chen",
"Greg Corrado",
"Jeffrey Dean"
] | We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day and one CPU to derive high quality 300-dimensional vectors for one million vocabulary from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community. | [
"word representations",
"vectors",
"efficient estimation",
"vector space",
"novel model architectures",
"continuous vector representations",
"words",
"large data sets",
"quality"
] | https://openreview.net/pdf?id=idpCdOWtqXd60 | https://openreview.net/forum?id=idpCdOWtqXd60 | QDmFD7aPnX1h7 | review | 1,360,857,420,000 | idpCdOWtqXd60 | [
"everyone"
] | [
"anonymous reviewer 13e8"
] | ICLR.cc/2013/conference | 2013 | title: review of Efficient Estimation of Word Representations in Vector Space
review: The authors propose two log-linear language models for learning real-valued vector representations of words. The models are designed to be simple and fast and are shown to be scalable to very large datasets. The resulting word embeddings are evaluated on a number of novel word similarity tasks, on which they perform at least as well as the embeddings obtained using a much slower neural language model.
The paper is mostly clear and well executed. Its main contributions are a demonstration of scalability of the proposed models and a sensible protocol for evaluating word similarity information captured by such embeddings. The experimental section is convincing.
The log-linear language models proposed are not quite as novel or uniquely scalable as the paper seems to imply, though. Models of this type were introduced in [R1] and further developed in [15] and [R2]. The idea of speeding up such models by eliminating matrix multiplication when combining the representations of context words was already implemented in [15] and [R2]. For example, the training complexity of the log-linear HLBL model from [15] is the same as that of the Continuous Bag-of-Words models. The authors should explain how the proposed log-linear models relate to the existing ones and in what ways they are superior. Note that Table 3 does contain a result obtained by an existing log-bilinear model, HLBL, which according to [18] was the model used to produce the 'Mhih NNLM' embeddings. These embeddings seem to perform considerably better than the 'Turian NNLM' embeddings obtained with a nonlinear NNLM on the same dataset, though of course not as well as the embeddings induced on much larger datasets. This result actually strengthens the authors' argument for using log-linear models by suggesting that even if one could train a slow nonlinear model on the same amount of data it might not be worth it as it will not necessarily produce superior word representations.
The discussion of techniques for speeding up training of neural language models is incomplete, as the authors do not mention sampling-based approaches such as importance sampling [R3] and noise-contrastive estimation [R2].
The paper is unclear about the objective used for model selection. Was it a language-modeling objective (e.g. perplexity) or accuracy on the word similarity tasks?
In the interests of precision, it would be good to include the equations defining the models in the paper.
In Section 3, it might be clearer to say that the models are trained to 'predict' words, not 'classify' them.
Finally, in Table 3 'Mhih NNLM' should probably read 'Mnih NNLM'.
References:
[R1] Mnih, A., & Hinton G. (2007). Three new graphical models for statistical language modelling. ICML 2007.
[R2] Mnih, A., & Teh, Y. W. (2012). A fast and simple algorithm for training neural probabilistic language models. ICML 2012.
[R3] Bengio, Y., & Senecal, J. S. (2008). Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks, 19(4), 713-722. |
idpCdOWtqXd60 | Efficient Estimation of Word Representations in Vector Space | [
"Tomas Mikolov",
"Kai Chen",
"Greg Corrado",
"Jeffrey Dean"
] | We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day and one CPU to derive high quality 300-dimensional vectors for one million vocabulary from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community. | [
"word representations",
"vectors",
"efficient estimation",
"vector space",
"novel model architectures",
"continuous vector representations",
"words",
"large data sets",
"quality"
] | https://openreview.net/pdf?id=idpCdOWtqXd60 | https://openreview.net/forum?id=idpCdOWtqXd60 | sJxHJpdSKIJNL | review | 1,363,326,840,000 | idpCdOWtqXd60 | [
"everyone"
] | [
"anonymous reviewer f5bf"
] | ICLR.cc/2013/conference | 2013 | review: The revision and rebuttal failed to address the issues raised by the reviewers. I do not think the paper should be accepted in its current form.
Quality rating: Strong reject
Confidence: Reviewer is knowledgeable |
idpCdOWtqXd60 | Efficient Estimation of Word Representations in Vector Space | [
"Tomas Mikolov",
"Kai Chen",
"Greg Corrado",
"Jeffrey Dean"
] | We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day and one CPU to derive high quality 300-dimensional vectors for one million vocabulary from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community. | [
"word representations",
"vectors",
"efficient estimation",
"vector space",
"novel model architectures",
"continuous vector representations",
"words",
"large data sets",
"quality"
] | https://openreview.net/pdf?id=idpCdOWtqXd60 | https://openreview.net/forum?id=idpCdOWtqXd60 | mmlAm0ZawBraS | review | 1,363,279,380,000 | idpCdOWtqXd60 | [
"everyone"
] | [
"anonymous reviewer f5bf"
] | ICLR.cc/2013/conference | 2013 | review: The revision and rebuttal failed to address the issues raised by the reviewers. I do not think the paper should be accepted in its current form.
Quality rating: Strong reject
Confidence: Reviewer is knowledgeable |
idpCdOWtqXd60 | Efficient Estimation of Word Representations in Vector Space | [
"Tomas Mikolov",
"Kai Chen",
"Greg Corrado",
"Jeffrey Dean"
] | We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day and one CPU to derive high quality 300-dimensional vectors for one million vocabulary from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community. | [
"word representations",
"vectors",
"efficient estimation",
"vector space",
"novel model architectures",
"continuous vector representations",
"words",
"large data sets",
"quality"
] | https://openreview.net/pdf?id=idpCdOWtqXd60 | https://openreview.net/forum?id=idpCdOWtqXd60 | qX8Cq3hI2EXpf | review | 1,363,279,380,000 | idpCdOWtqXd60 | [
"everyone"
] | [
"anonymous reviewer f5bf"
] | ICLR.cc/2013/conference | 2013 | review: The revision and rebuttal failed to address the issues raised by the reviewers. I do not think the paper should be accepted in its current form.
Quality rating: Strong reject
Confidence: Reviewer is knowledgeable |
idpCdOWtqXd60 | Efficient Estimation of Word Representations in Vector Space | [
"Tomas Mikolov",
"Kai Chen",
"Greg Corrado",
"Jeffrey Dean"
] | We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day and one CPU to derive high quality 300-dimensional vectors for one million vocabulary from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring various types of word similarities. We intend to publish this test set to be used by the research community. | [
"word representations",
"vectors",
"efficient estimation",
"vector space",
"novel model architectures",
"continuous vector representations",
"words",
"large data sets",
"quality"
] | https://openreview.net/pdf?id=idpCdOWtqXd60 | https://openreview.net/forum?id=idpCdOWtqXd60 | C8Vn84fqSG8qa | review | 1,362,716,940,000 | idpCdOWtqXd60 | [
"everyone"
] | [
"Tomas Mikolov"
] | ICLR.cc/2013/conference | 2013 | review: We have updated the paper (new version will be visible on Monday):
- added new results with comparison of models trained on the same data with the same dimensionality of the word vectors
- additional comparison on a task that was used previously for comparison of word vectors
- added citations, more discussion about the prior work
- new results with parallel training of the models on many machines
- new state of the art result on Microsoft Research Sentence Completion Challenge, using combination of RNNLMs and Skip-gram
- published the test set
We welcome discussion about the paper. The main contribution (that seems to have been missed by some of the reviews) is that we can use very shallow models to compute good vector representation of words. This can be very efficient, compared to currently popular model architectures.
As we are very interested in deep learning, we are also interested in how this term is being used. Unfortunately, there is an increasing amount of work that tries to associate itself with deep learning, although it has nothing to do with it. According to Bengio+LeCun's paper 'Scaling learning algorithms towards AI', deep architectures should be capable of representing and learning complex functions, composed of simpler functions. These complex functions, at the same time, cannot be efficiently represented and learned by shallow architectures (basically those that have only 1 or 0 non-linearities). Thus, any paper that claims to be about 'deep learning' should first prove that the given performance cannot be achieved with a shallow model. This has already been shown for deep neural networks for speech recognition and vision problems (one hidden layer is not enough to reach the same performance that more hidden layers can achieve). However, when it comes to NLP, the only such results known to me are with recurrent neural networks, which have been shown to outperform shallow feed-forward networks on some tasks in language modeling.
When it comes to learning continuous representations of words, such a thorough comparison is missing. In our current paper, we actually show that there might be nothing deep about the continuous word vectors - one cannot simply add a few hidden layers and label some technique 'deep' to gain attention. A correct comparison with shallow techniques is necessary.
Hopefully, our paper will improve common understanding of what deep learning is about, and will help to keep research on track towards the original goals. We did not write our opinion directly in the paper, as we believe it belongs more to the discussion part of the conference, where people can react to our claims.
Detailed responses are below:
Reviewer Anonymous 13e8:
The log-linear language models proposed are not quite as novel or uniquely scalable as the paper seems to imply though. Models of this type were introduced in [R1] and further developed in [15] and [R2].
- We have added the citations and some discussion; however, note that we directly follow model architecture proposed earlier, in 'T. Mikolov. Language Modeling for Speech Recognition in Czech, Masters thesis, Brno University of Technology, 2007.', plus the hierarchical softmax proposed in 'F. Morin, Y. Bengio. Hierarchical Probabilistic Neural Network Language Model. AISTATS, 2005.'; the novelty of our current approach is in the new architectures that work significantly better than the previous ones (we have added this comparison in the new version of the paper), and the Huffman tree based hierarchical softmax.
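For context on the Huffman-tree-based hierarchical softmax mentioned above, the sketch below is my own illustration of the standard Huffman-coding construction (not the authors' implementation): frequent words receive short codes, so predicting them requires only a few binary decisions instead of a full softmax over the vocabulary.

```python
import heapq
import itertools

def huffman_code_lengths(freqs):
    """Return the Huffman code length for each symbol, given its frequency.
    In a hierarchical softmax this is the number of binary decisions
    needed to predict that word."""
    counter = itertools.count()                      # tie-breaker for the heap
    heap = [(f, next(counter), [i]) for i, f in enumerate(freqs)]
    heapq.heapify(heap)
    lengths = [0] * len(freqs)
    while len(heap) > 1:
        f1, _, ids1 = heapq.heappop(heap)
        f2, _, ids2 = heapq.heappop(heap)
        for i in ids1 + ids2:                        # every merge adds one bit
            lengths[i] += 1
        heapq.heappush(heap, (f1 + f2, next(counter), ids1 + ids2))
    return lengths

# frequent words get short codes, rare words long ones
print(huffman_code_lengths([50, 20, 15, 10, 5]))     # e.g. [1, 2, 3, 4, 4]
```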
For example, the training complexity of the log-linear HLBL model from [15] is the same as that of the Continuous Bag-of-Words models
- Assuming one will use diagonal weight matrices as is mentioned in [15], the computational complexity will be similar. We have added this information to the paper. Our proposed architectures are, however, easier to implement than HLBL, and also it does not seem that we would obtain better vectors with HLBL (just by looking at the table with results - HLBL seems to have performance close to the NNLM, i.e., it does not capture the semantic regularities in words as well as the Skip-gram). Moreover, I was confused about the computational complexity of the hierarchical log-bilinear model: in [R2], it is reported that the training time for a model with 100 hidden units on the Penn Treebank setup is 1.5 hours; for our CBOW model it is a few seconds. So I don't know whether the author always uses the diagonal weight matrices or not.
Additionally, the perplexity results in [R2] are rather weak, even worse than a simple trigram model. My explanation of the HLBL performance is this: the model does not have non-linearities, and thus it cannot model N-grams. An example of such a feature is 'if words X and Y occurred after each other, predict word Z'; the linear model can only represent features such as 'X predicts Z, Y predicts Z'. This means that the HLBL language model will probably not scale up well to large data sets, as it can model only patterns such as bigram, skip-1-bigram, skip-2-bigram etc. (and will thus behave slightly like a cache model, and will improve with longer context - which was actually observed in [R1] and [15]). Also note that the comparison in [R1] with the NNLM is flawed, as the result from (Bengio, 2003) is from a model that was small and not fully trained (due to computational complexity).
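A toy numerical illustration of the argument above (my own construction, not from the paper): with binary indicators for 'X in context' and 'Y in context', a purely linear score cannot respond to the conjunction of X and Y without also responding to each word alone, whereas a single sigmoid hidden unit can approximate the AND feature.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# binary indicators: (X present, Y present)
contexts = [(0, 0), (1, 0), (0, 1), (1, 1)]

# linear score w_x*x + w_y*y: any weights that make (1,1) score high also
# make (1,0) or (0,1) score at least half as high -- no pure conjunction.
w_x, w_y = 1.0, 1.0
print([w_x * x + w_y * y for x, y in contexts])              # [0.0, 1.0, 1.0, 2.0]

# one sigmoid hidden unit approximates the AND feature 'X and Y -> Z'
print([round(sigmoid(10 * x + 10 * y - 15), 3) for x, y in contexts])
# [0.0, 0.007, 0.007, 0.993]
```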
To conclude, the HLBL is a very interesting model by itself, but we have chosen a simpler architecture that follows our earlier work and that aims to solve a simpler problem - we do not try to learn a language model, just the word vectors. A detailed discussion of HLBL is out of the scope of our current paper.
The discussion of techniques for speeding up training of neural language models is incomplete, as the authors do not mention sampling-based approaches such as importance sampling [R3] and noise-contrastive estimation [R2].
- As our paper is already quite long, we do not plan to discuss speedup techniques that we did not use in our work. It can be a topic for future work.
The paper is unclear about the objective used for model selection. Was it a language-modeling objective (e.g. perplexity) or accuracy on the word similarity tasks?
- The cost function that we try to minimize during training is the usual one (cross-entropy), however we choose the best models based on the performance on the word similarity task.
In the interests of precision, it would be good to include the equations defining the models in the paper.
- Unfortunately the paper is already too long, so we just refer to prior work where similar models are properly defined. If we will extend the paper in the future, we will add the equations.
Reviewer Anonymous f5bf:
Concern (1): We added a table with comparison of models trained on the same data. The results strongly support our previous claims (we had some of these results already before the first version of the paper was submitted, but due to lack of time these did not appear in the paper).
(2): We added a Figure that illustrates the topology of the models, and kept the equations as we consider them important.
(3): No, see the equations and Table 4.
The paper contains numerous typos and small errors. Specifically, I noticed a lot of missing articles throughout the paper.
- We hope that small errors and missing articles are not the most important issue in research papers.
'For many tasks, the amount of … focus on more advanced techniques.'
- The introduction was updated.
What about the fact that semantic similarities are intransitive? (Tversky's famous example of the similarity between China and North Korea.)
- We are not aware of famous example of Tversky. Please provide reference.
'Moreover, we discuss hyper-parameter selection … millions of words in the vocabulary.' -> I fail to see the relation between hyperparameter selection and training speed. Moreover, the paper actually does not say anything about hyperparameter selection! It only states the initial learning rate is 0.025, and that is linearly decreased (but not how fast).
- Note that the structure and size of the model are also hyper-parameters, as well as the fraction of the training data used; it is not just the learning rate. However, we simplified the text in the paper.
Table 2: It appears that the performance of the CBOW model is still improving. How does it perform when D = 1000 or 2000? Why not make a learning curve here (plot performance as a function of D or of training time)?
- That is an interesting experiment that we actually performed, but it would not fit into the paper.
Table 3: Why is 'our NNLM' so much better than the other NNLMs? Just because it was trained on more data? What model is implemented by 'our NNLM' anyway?
- Because it was trained in parallel using hundreds of CPUs. It is a feedforward NNLM.
Tables 3 and 4: Why is the NNLM trained on 6 billion examples and the others on just 0.7 or 1.6 billion examples? The others should be faster, so easier to train on more data, right?
- We did not have these numbers at the time of submission of the paper, but these results were added to the current version of the paper. The new model architectures are faster to train than the NNLM, and provide better results on our word similarity tasks.
It would be interesting if the authors could say something about how these models deal with intransitive semantic similarities, e.g., with the similarities between 'river', 'bank', and 'bailout'. People like Tversky have advocated against the use of semantic-space models like NLMs because they cannot appropriately model intransitive similarities.
- We are not aware of Tversky's arguments.
Instead of looking at binary question-answering performance, it may also be interesting to look whether a hitlist of answers contains the correct answer.
- We performed this experiment as well; of course, top-5 accuracy is much better than top-1. However, it would be confusing to add these results into the paper (too many numbers).
The number of self-citations seems somewhat excessive.
- We added more citations.
I tried to find reference [14] to see how it differs from the present paper, but I was not able to find it anywhere.
- This paper should become available on-line soon.
Reviewer Anonymous 3c5e:
One problem that has already been pointed out by the other reviewers is the lack of comparison and proper acknowledgment of previous models. The log-linear models have already been introduced by Mnih et al. and the averaging of context vectors (though of a larger context) has already been introduced by Huang et al. Both are cited but the model similarity is not mentioned.
- As we explained earlier, we followed our own work that was published before these papers. We aim to learn word vectors, not language models. Note also that log-linear models and the bag-of-words representation are both very general and well-known concepts, not unique to neural network language modeling. Also, Mnih introduced a log-bilinear language model, not log-linear models - please read: http://en.wikipedia.org/wiki/Log-linear_model
and http://en.wikipedia.org/wiki/Bag-of-words_model
The other main problem is that the evaluation metric clearly favors linear models since it checks for linear relationships. While it is an interesting finding that this holds for any of the models, this phenomenon does not necessarily need to lead to better performance. Other non-linear models may have encoded all this information too but not on a linear manifold. The whole new evaluation metric is just showing that linear models have more linear relationships. If this was combined with some performance increase on a real task then non-linearity for word vectors would have been convincingly questioned.
- Note that projection layer in NNLM also does not have any non-linearity; Mnih's HLBL model does not have any non-linearity even in the hidden layer. We added more results in the paper, however can you be more specific what 'real task' means? The tasks we used are perfectly valid for a wide range of applications.
Do these relationships hold for even simpler models like LSA or tf-idf vectors?
- This is discussed in another paper. In general, linear operations do not work well for LSA vectors.
Many very broad and general statements are made without any citations to back them up.
- Please be more specific.
The motivation talks about how simpler bag of words models are not sufficient anymore to make significant progress... and then the rest of the paper introduces a simpler bag of words model and argues that it's better. The intro and the first paragraph of section 3 directly contradict themselves.
- This part of the paper was rewritten. However, N-gram models are mentioned in the introduction, not bag-of-words models. Also note that the paper is about computationally efficient continuous representations of words. We do not introduce a simple bag-of-words model, but a log-linear model with distributed representations of bag-of-words features (in the case of the CBOW model).
The other motivation that is mentioned is how useful for actual tasks word vectors can be. I agree but this is not shown. This paper would have been significantly stronger if the vectors from the proposed (not so new) model would have been compared on any of the standard evaluation metrics that have been used for these words. For instance: Turian et al used NER, Huang et al used human similarity judgments, the author himself used them for language modeling. Why not show improvements on any of these tasks?
- We believe that our task is very interesting by itself. The applications are very straightforward.
LDA and LSA are missing citations.
- We are not using LDA nor LSA in our paper. Moreover, these concepts are generally very well known.
The hidden layer in Collobert et al's word vectors is usually around 100, not between 500 to 1000 as the authors write.
- We do not claim that hidden layer in Collobert et al's word vectors is usually between 500-1000. We actually point out that 50 and 100-dimensional word vectors have insufficient capacity, and the same holds for size of the hidden layer. The 500 - 2000 dimensional hidden layers are mentioned for NNLMs. We also provide reference to our prior paper that shows empirically that you have to use more than 100 neurons in the hidden layer, unless your training data is tiny ('Strategies for training large scale neural network language models').
Section 2.2 is impossible to follow for people not familiar with this line of work.
- This section is not crucial for understanding of the paper. However, if you are interested in this part, we provided several references for that work.
Why cosine distance? A comparison with Euclidean distance would be interesting, or should all word vectors be length-normalized?
- We use normalized word vectors. Empirically, this works better.
The authors claim that their evaluation metric 'should be positively correlated with' 'certain applications'. That's yet another unsubstantiated claim that could be made much stronger with showing such a correlation on the above mentioned tasks.
- While we also have results on other tasks, the point of this paper is not to describe all possible applications, but to introduce techniques for efficient estimation of word vectors from large amounts of data.
The comparisons are lacking consistency. All the models are trained on different corpora and have different dimensionality. Looking at the top 3 previous models (Mikolov 2x and Huang) there seems to be a clear correlation between vector size and overall performance. If one wants to make a convincing argument that the presented models are better, it would be important to show that using the same corpus.
- Such comparison was added to the new version of the paper.
Given that the overall accuracy is around 50%, the examples in table 5 must have been manually selected? If not, it would be great to know how they were selected.
- Maybe this will sound surprising, but examples in Table 5 have accuracy only about 60%. We did choose several easy examples from our Semantic-Syntactic test set (so that it would be easy to judge correctness for the readers), and some manually by trying out what the vectors can represent. Note that we did not simply hand-pick the best examples; this is the real performance. |
zzy0H3ZbWiHsS | Audio Artist Identification by Deep Neural Network | [
"胡振",
"Kun Fu",
"Changshui Zhang"
] | Since it officially began in 2005, the annual Music Information Retrieval Evaluation eXchange (MIREX) has made great contributions to Music Information Retrieval (MIR) research. By defining some important tasks and providing a meaningful comparison system, the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL), organizer of the MIREX, drives researchers in the MIR field to develop more advanced systems to fulfill the tasks. One of the important tasks is the Audio Artist Identification task, or the AAI task. We implemented a Deep Belief Network (DBN) to identify the artist from the audio signal. As a matter of copyright, IMIRSEL didn't publish their data set and we had to construct our own. On our data set we got an accuracy of 69.87% without carefully choosing parameters, while the best result reported on MIREX is 69.70%. We think our method is promising and we want to discuss it with others. | [
"mirex",
"important tasks",
"imirsel",
"audio artist identification",
"deep neural network",
"great contributions",
"music information retrieval",
"mir"
] | https://openreview.net/pdf?id=zzy0H3ZbWiHsS | https://openreview.net/forum?id=zzy0H3ZbWiHsS | Zg8fgYb5dAUiY | review | 1,362,479,820,000 | zzy0H3ZbWiHsS | [
"everyone"
] | [
"anonymous reviewer 589d"
] | ICLR.cc/2013/conference | 2013 | title: review of Audio Artist Identification by Deep Neural Network
review: A brief summary of the paper's contributions, in the context of prior work:
This paper builds a hybrid model based on a Deep Belief Network (DBN) and a Stacked Denoising Autoencoder (SDA) and applies it to the Audio Artist Identification (AAI) task. Specifically, the proposed model is constructed with a two-layer SDA in the lower layers, a two-layer DBN in the middle, and a logistic regression classification layer on top. The proposed model seems to achieve good classification performance.
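To make the described stack concrete, here is a minimal NumPy sketch of the inference path through such a hybrid model (two denoising-autoencoder layers, two RBM layers, and a logistic regression output). The layer sizes and weights are arbitrary placeholders standing in for layer-wise pretrained parameters; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# made-up layer sizes: input features -> 2 DA layers -> 2 RBM layers -> classes
sizes = [513, 400, 300, 200, 100, 11]
weights = [rng.normal(scale=0.01, size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    """Forward pass through the stacked model: sigmoid hidden layers
    (standing in for the pretrained DA and RBM encoders), softmax on top."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = sigmoid(h @ W + b)
    return softmax(h @ weights[-1] + biases[-1])

x = rng.normal(size=(1, sizes[0]))       # one (fake) audio feature vector
print(forward(x).argmax(axis=-1))        # predicted artist index
```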
An assessment of novelty and quality:
The paper proposes a hybrid deep network by stacking denoising autoencoders and RBMs.
Although this may be a new way of building a deep network, it seems to be a minor modification of the standard methods. Therefore, the novelty seems to be limited.
More importantly, the motivation or justification for the hybrid architecture is not clearly presented. Without a clear motivation or justification, this method doesn't seem to be technically interesting. To make a fair comparison to other baseline methods, the SDA2-DBN2 should be compared to DBN4 or SDA4, but there are no such comparisons.
Although the classification performance of the proposed method is good, the results are not directly comparable to other work in the literature. It would be helpful to apply some widely used methods to the authors' data set as additional control experiments.
The paper isn’t well polished. There are many awkward sentences and grammatical errors.
Other comments:
Figure 2 is anecdotal and is not convincing enough.
Authors use some non-standard terminology. For example, what does “MAP paradigm” mean?
In Table 3, rows corresponding to “#DA layers”, “#RBM layers”, “#logistic layers” are unnecessary.
A list of pros and cons (reasons to accept/reject)
pros:
+ Literature review seems fine.
+ good (but incomplete) empirical classification results
cons:
- lack of clear motivation or justification of the hybrid method; lack of proper control experiments
- the results are not comparable to other published work
- unpolished writing (lots of awkward sentences and grammatical errors). |
zzy0H3ZbWiHsS | Audio Artist Identification by Deep Neural Network | [
"胡振",
"Kun Fu",
"Changshui Zhang"
] | Since it officially began in 2005, the annual Music Information Retrieval Evaluation eXchange (MIREX) has made great contributions to Music Information Retrieval (MIR) research. By defining some important tasks and providing a meaningful comparison system, the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL), organizer of MIREX, drives researchers in the MIR field to develop more advanced systems to fulfill the tasks. One of the important tasks is the Audio Artist Identification task, or the AAI task. We implemented a Deep Belief Network (DBN) to identify the artist from the audio signal. As a matter of copyright, IMIRSEL didn't publish their data set and we had to construct our own. On our data set we got an accuracy of 69.87% without carefully choosing parameters, while the best result reported on MIREX is 69.70%. We think our method is promising and we want to discuss it with others. | [
"mirex",
"important tasks",
"imirsel",
"audio artist identification",
"deep neural network",
"great contributions",
"music information retrieval",
"mir"
] | https://openreview.net/pdf?id=zzy0H3ZbWiHsS | https://openreview.net/forum?id=zzy0H3ZbWiHsS | obqUAuHWC9mWc | review | 1,362,137,160,000 | zzy0H3ZbWiHsS | [
"everyone"
] | [
"anonymous reviewer 8eb9"
] | ICLR.cc/2013/conference | 2013 | title: review of Audio Artist Identification by Deep Neural Network
review: This paper presents an application of a hybrid deep learning model to the task of audio artist identification.
Novelty:
+ The novelty of the paper comes from using a hybrid unsupervised learning approach by stacking Denoising Auto-Encoders (DA) and Restricted Boltzmann Machines (RBM).
= Another minor novelty is the application of deep learning to artist identification. However, deep learning has already been applied to similar tasks before, such as genre recognition and automatic tag annotation.
- Unfortunately, I found that the major contributions of the paper are not exposed clearly enough in the introduction.
Quality of presentation:
- The quality of the presentation leaves something to be desired. More careful proofreading would have been required. There are several sentences with grammatical errors. Several verbs or adjectives are wrong. The writing style is also sometimes inadequate for a scientific paper (e.g. 'we will review some fantastic work', 'we can build many outstanding networks'). The quality of the English is, in general, inadequate.
- The abstract does not present in a relevant and concise manner the essential points of the paper.
- Also, there is a bit of confusion between the introduction and related work sections, as most of the introduction is also about related work.
Reference to previous work:
+ Previous related work coverage is good. Previous work in deep learning and its applications in MIR, as well as work in audio artist identification are well covered.
- In the beginning of section 5: 'It's known that Bach, Beethoven and Brahms, known as the three Bs, shared some style when they wrote their composition.' I find this claim, without any reference, hard to understand. Bach, Beethoven and Brahms are from 3 different musical eras. How are these 3 composers more related than the others?
Quality of the research.
- Although the idea of using a hybrid deep learning system might be new, no justification as to why such a system should work better is presented in the paper.
- In the experiments, the authors compare the hybrid model to pure models. However, the pure models all have less layers than the hybrid model. Why didn't the authors compare same-depth models? I feel it would have made a much stronger point.
- Although the authors describe in details the theory behind SDAs and DBNs, there is little to no detail about the hyper-parameters used in the actual model (number of hidden units, number of unsupervised epochs, regularization, etc.). How was the data corrupted for the DA? White Noise, or random flipped bits? How many steps in the CD? These details would be important to reproduce the results.
- At the beginning of sections 3 and 6, the authors mention that they think their model will project the data into a semantic space which is very sparse. How is your model learning a sparse representation? Have you used sparseness constraints in your training? If so, there is no mention of it in the paper.
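To make the corruption question above concrete, the two corruption schemes most commonly used with denoising autoencoders are sketched below (a generic illustration, not the authors' code); which of these was used, and with what parameters, is exactly the missing detail.

    import numpy as np

    rng = np.random.default_rng(0)

    def masking_noise(x, p=0.3):
        # randomly zero out a fraction p of the inputs ("dropped/flipped bits")
        return x * (rng.random(x.shape) > p)

    def gaussian_noise(x, sigma=0.1):
        # additive white Gaussian noise
        return x + sigma * rng.normal(size=x.shape)

    x = rng.random(20)             # a toy input frame (e.g. spectral features)
    x_masked = masking_noise(x)
    x_noisy = gaussian_noise(x)
    # the denoising autoencoder is then trained to reconstruct the clean x
    # from x_masked or x_noisy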
zzy0H3ZbWiHsS | Audio Artist Identification by Deep Neural Network | [
"胡振",
"Kun Fu",
"Changshui Zhang"
] | Since it officially began in 2005, the annual Music Information Retrieval Evaluation eXchange (MIREX) has made great contributions to Music Information Retrieval (MIR) research. By defining some important tasks and providing a meaningful comparison system, the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL), organizer of MIREX, drives researchers in the MIR field to develop more advanced systems to fulfill the tasks. One of the important tasks is the Audio Artist Identification task, or the AAI task. We implemented a Deep Belief Network (DBN) to identify the artist from the audio signal. As a matter of copyright, IMIRSEL didn't publish their data set and we had to construct our own. On our data set we got an accuracy of 69.87% without carefully choosing parameters, while the best result reported on MIREX is 69.70%. We think our method is promising and we want to discuss it with others. | [
"mirex",
"important tasks",
"imirsel",
"audio artist identification",
"deep neural network",
"great contributions",
"music information retrieval",
"mir"
] | https://openreview.net/pdf?id=zzy0H3ZbWiHsS | https://openreview.net/forum?id=zzy0H3ZbWiHsS | k3fr32tl6qARo | review | 1,362,226,800,000 | zzy0H3ZbWiHsS | [
"everyone"
] | [
"anonymous reviewer b7e1"
] | ICLR.cc/2013/conference | 2013 | title: review of Audio Artist Identification by Deep Neural Network
review: This paper describes work to collect a new dataset with music from 11 classical composers for the task of audio composer identification (although the title, abstract, and introduction use the phrase 'audio artist identification' which is a different task). It describes experiments training a few different deep neural networks to perform this classification task.
The paper is not very novel. It describes existing deep architectures applied to a new version of an existing dataset for an existing task.
The quality of the paper is not very high. The comparisons of the models were not systematic and because it is a new dataset, they cannot be compared directly to results on other datasets of existing models. There are very few specifics given about the models used (layer sizes, cost functions, input feature types, specific input features).
The use of mel frequency spectrum seems dubious for this task. What distinguishes classical works from different composers is generally harmonic and melodic content, which mel frequency spectrum ignores almost entirely.
Few details are given about the make-up of the new dataset. Are these orchestral pieces, chamber pieces, concertos, piano pieces, etc? How many clips came from each piece? How many clips came from each movement? The use of clips from different movements of the same piece in the training and test sets might account for the increase in accuracy scores relative to previous MIREX results. Movements from the same piece generally share many characteristics like recording conditions, production, instrumentation, and timbre, which are the main characteristics captured by mel frequency spectrum. They also generally share harmonic and melodic content.
And finally, the 'Three B's' that the authors refer to, Bach, Beethoven, and Brahms, are very different composers from different musical eras. Their works should not be easily confused with each other, and so the fact that the proposed algorithm does confuse them is concerning. Potentially it indicates the weakness of the mel spectrum for performing this task.
Pros:
- Literary presentation of the paper is high (although there are a number of strange word substitutions)
- Decent summary of existing work
- New dataset might be useful, if it is made public, although it is pretty small
Cons:
- Little novelty
- Un-systematic comparisons of systems
- Features don't make much sense
- Few details on actual systems compared and on the dataset
- Few generalizable conclusions |
zzy0H3ZbWiHsS | Audio Artist Identification by Deep Neural Network | [
"胡振",
"Kun Fu",
"Changshui Zhang"
] | Since it officially began in 2005, the annual Music Information Retrieval Evaluation eXchange (MIREX) has made great contributions to Music Information Retrieval (MIR) research. By defining some important tasks and providing a meaningful comparison system, the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL), organizer of MIREX, drives researchers in the MIR field to develop more advanced systems to fulfill the tasks. One of the important tasks is the Audio Artist Identification task, or the AAI task. We implemented a Deep Belief Network (DBN) to identify the artist from the audio signal. As a matter of copyright, IMIRSEL didn't publish their data set and we had to construct our own. On our data set we got an accuracy of 69.87% without carefully choosing parameters, while the best result reported on MIREX is 69.70%. We think our method is promising and we want to discuss it with others. | [
"mirex",
"important tasks",
"imirsel",
"audio artist identification",
"deep neural network",
"great contributions",
"music information retrieval",
"mir"
] | https://openreview.net/pdf?id=zzy0H3ZbWiHsS | https://openreview.net/forum?id=zzy0H3ZbWiHsS | qbjSYWhow-bDl | review | 1,362,725,700,000 | zzy0H3ZbWiHsS | [
"everyone"
] | [
"胡振"
] | ICLR.cc/2013/conference | 2013 | review: Thank you. We will revise our paper as soon as possible.
Zhen |
7IOAIAx1AiEYC | Adaptive learning rates and parallelization for stochastic, sparse,
non-smooth gradients | [
"Tom Schaul",
"Yann LeCun"
] | Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in non-stationary tasks. Here, we extend the idea in three directions, addressing proper minibatch parallelization, including reweighted updates for sparse or orthogonal gradients, improving robustness on non-smooth loss functions, in the process replacing the diagonal Hessian estimation procedure that may not always be available by a robust finite-difference approximation. The final algorithm integrates all these components, has linear complexity and is hyper-parameter free. | [
"rates",
"sparse",
"parallelization",
"stochastic",
"adaptive learning rates",
"gradients adaptive",
"gradients recent work",
"successful framework",
"stochastic gradient descent",
"sgd"
] | https://openreview.net/pdf?id=7IOAIAx1AiEYC | https://openreview.net/forum?id=7IOAIAx1AiEYC | UUYiUZMOiCjl1 | review | 1,362,388,500,000 | 7IOAIAx1AiEYC | [
"everyone"
] | [
"anonymous reviewer 7b8e"
] | ICLR.cc/2013/conference | 2013 | title: review of Adaptive learning rates and parallelization for stochastic, sparse,
non-smooth gradients
review: This is a paper that builds on the adaptive learning rate scheme proposed in [1] for choosing learning rates when optimizing a neural network.
The first result (eq. 3) is that of figuring out an optimal learning rate schedule for a given mini-batch size n (a very realistic scenario, when one cannot adapt the size of the mini-batch during training because of computational and architectural constraints).
The second interesting result is that of setting the learning rates in those cases where one has sparse gradients (rectified linear units etc) -- this results in an effective rescaling of the rates by the number of non-zero elements in a given minibatch.
The third nice result is the observation that in a sparse situation the gradient update directions are mostly orthogonal. Taking this intuition to its logical conclusion, the authors thus introduce a re-weighting scheme that essentially encourages the gradient updates to be orthogonal to each other (by weighting them in proportion to one over the number of times they interfere with each other). While the authors claim that this can be computationally expensive generally speaking, for problems of realistic sizes (d in the tens of millions and n a few dozen examples) this can be quite interesting.
The final interesting result is that of adapting the curvature estimation to the fact that with the advent of rectified linear units we are often faced with optimizing non-smooth loss functions. The authors propose a method that is based on finite differences (with some robustness improvements) and is vaguely similar to what is done in SGD-QN.
Generally this is a very well-written paper that proposes a few sensible and relatively easy to implement ideas for adaptive learning rate schemes. I expect researchers in the field to find these ideas valuable. One disappointing aspect of the paper is the lack of real-world results on things other than simulated (and known) loss functions. |
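As a rough sketch of the kind of update rule being reviewed (the exact minibatch formula is equation 3 of the paper; the form below is a simplified reading of the vSGD idea of [1], with the gradient variance divided by the minibatch size n and all moving-average bookkeeping omitted):

    import numpy as np

    def vsgd_like_rate(g_bar, g2_bar, h, n):
        # g_bar:  running average of the gradient (per parameter)
        # g2_bar: running average of the squared gradient
        # h:      estimate of the diagonal curvature
        # n:      minibatch size; averaging n samples divides the variance by n
        var = np.maximum(g2_bar - g_bar**2, 0.0) / n
        return g_bar**2 / (h * (g_bar**2 + var) + 1e-12)

    # toy usage with made-up statistics for a 5-parameter model
    g_bar = np.array([0.1, -0.2, 0.05, 0.0, 0.3])
    g2_bar = g_bar**2 + np.array([0.5, 0.1, 0.2, 0.4, 0.05])
    h = np.ones(5)
    print(vsgd_like_rate(g_bar, g2_bar, h, n=10))

The appealing property is visible even in this toy form: noisier parameters (large variance relative to the mean gradient) automatically get smaller learning rates, and larger minibatches push the rates back up.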
7IOAIAx1AiEYC | Adaptive learning rates and parallelization for stochastic, sparse,
non-smooth gradients | [
"Tom Schaul",
"Yann LeCun"
] | Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in non-stationary tasks. Here, we extend the idea in three directions, addressing proper minibatch parallelization, including reweighted updates for sparse or orthogonal gradients, improving robustness on non-smooth loss functions, in the process replacing the diagonal Hessian estimation procedure that may not always be available by a robust finite-difference approximation. The final algorithm integrates all these components, has linear complexity and is hyper-parameter free. | [
"rates",
"sparse",
"parallelization",
"stochastic",
"adaptive learning rates",
"gradients adaptive",
"gradients recent work",
"successful framework",
"stochastic gradient descent",
"sgd"
] | https://openreview.net/pdf?id=7IOAIAx1AiEYC | https://openreview.net/forum?id=7IOAIAx1AiEYC | hhgfZq1Yf5hzr | review | 1,362,001,560,000 | 7IOAIAx1AiEYC | [
"everyone"
] | [
"anonymous reviewer 7318"
] | ICLR.cc/2013/conference | 2013 | title: review of Adaptive learning rates and parallelization for stochastic, sparse,
non-smooth gradients
review: summary:
The paper proposes a new variant of stochastic gradient descent that is fully automated (no
hyper-parameter to tune) and is robust to various scenarios, including mini-batches,
sparsity, and non-smooth gradients. It relies on an adaptive learning rate that takes
into account a moving average of the Hessian. The result is a single algorithm that takes about 4x
memory (with respect to the size of the model) and is easy to implement.
The algorithm is tested on purely artificial tasks, as a proof of concept.
review:
- The paper relies on some previous algorithm (bbprop) that is not provided here and only
explained briefly on page 5, while first used on page 2. It would have been nice to provide
more information about it earlier.
- The 'parallelization trick' using mini-batches is good for a single-machine approach, where
one can use multiple cores, but is thus limited by the number of cores. Also, how would
this 'interfere' with Hogwild-type updates, which also efficiently use multi-core approaches
for SGD?
- Obviously, results on real large datasets would have been welcome (I do think experiments
on artificial datasets are very useful as well, but they may hide the fact that we have not
fully understood the complexity of real datasets). |
7IOAIAx1AiEYC | Adaptive learning rates and parallelization for stochastic, sparse,
non-smooth gradients | [
"Tom Schaul",
"Yann LeCun"
] | Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in non-stationary tasks. Here, we extend the idea in three directions, addressing proper minibatch parallelization, including reweighted updates for sparse or orthogonal gradients, improving robustness on non-smooth loss functions, in the process replacing the diagonal Hessian estimation procedure that may not always be available by a robust finite-difference approximation. The final algorithm integrates all these components, has linear complexity and is hyper-parameter free. | [
"rates",
"sparse",
"parallelization",
"stochastic",
"adaptive learning rates",
"gradients adaptive",
"gradients recent work",
"successful framework",
"stochastic gradient descent",
"sgd"
] | https://openreview.net/pdf?id=7IOAIAx1AiEYC | https://openreview.net/forum?id=7IOAIAx1AiEYC | _VZcVNP2cvtGj | review | 1,362,529,800,000 | 7IOAIAx1AiEYC | [
"everyone"
] | [
"Tom Schaul, Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their constructive comments. We'll try to clarify a few points they bring up:
Parallelization: The batchsize-aware adaptive learning rates (equation 3) are applicable independently of how the minibatches are computed, whether on a multi-core machine, or across multiple machines. They are in fact complementary to the asynchronous updates of Hogwild, in that they remove its need for tuning learning rate ('gamma') and learning rate decay ('beta').
Bbprop: The original version of vSGD (presented in [1]) does indeed require the 'bbprop' algorithm as one of its components to estimate element-wise curvature. One of the main points of this paper, however, is to replace it by a less brittle approach, based on finite-differences (section 5).
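For readers unfamiliar with the finite-difference alternative: the idea is to estimate per-parameter curvature from two gradient evaluations instead of an analytic second derivative. A simplified, hypothetical sketch (not the exact procedure of Section 5, which adds several robustness safeguards):

    import numpy as np

    def fd_curvature(grad_fn, theta, delta=1e-2):
        # grad_fn(theta) returns the (possibly stochastic) gradient at theta
        g0 = grad_fn(theta)
        g1 = grad_fn(theta + delta * np.sign(g0))   # small probing step
        # element-wise absolute finite-difference estimate of the diagonal curvature
        return np.abs(g1 - g0) / delta

    # toy quadratic loss 0.5 * sum(a * theta^2) has diagonal Hessian a
    a = np.array([1.0, 4.0, 0.25])
    grad_fn = lambda th: a * th
    print(fd_curvature(grad_fn, np.array([1.0, -2.0, 3.0])))   # approx [1, 4, 0.25]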
Large-scale experiments: We conducted a broad range of such experiments in the precursor paper [1], which demonstrated that the performance of the adaptive learning rates does correspond to the best-tuned SGD. Under the assumption that curvature does not change too fast, the original vSGD (using bbprop) and the one presented here (using finite differences) are equivalent, so those results are still valid -- but for more difficult (non-smooth) learning problems the new variant should be much more robust.
We'd also like to point out that an open-source implementation is now available at
http://github.com/schaul/py-optim/blob/master/PyOptim/algorithms/vsgd.py |
7IOAIAx1AiEYC | Adaptive learning rates and parallelization for stochastic, sparse,
non-smooth gradients | [
"Tom Schaul",
"Yann LeCun"
] | Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in non-stationary tasks. Here, we extend the idea in three directions, addressing proper minibatch parallelization, including reweighted updates for sparse or orthogonal gradients, improving robustness on non-smooth loss functions, in the process replacing the diagonal Hessian estimation procedure that may not always be available by a robust finite-difference approximation. The final algorithm integrates all these components, has linear complexity and is hyper-parameter free. | [
"rates",
"sparse",
"parallelization",
"stochastic",
"adaptive learning rates",
"gradients adaptive",
"gradients recent work",
"successful framework",
"stochastic gradient descent",
"sgd"
] | https://openreview.net/pdf?id=7IOAIAx1AiEYC | https://openreview.net/forum?id=7IOAIAx1AiEYC | _5dVjqxuVf560 | review | 1,361,565,480,000 | 7IOAIAx1AiEYC | [
"everyone"
] | [
"anonymous reviewer 0321"
] | ICLR.cc/2013/conference | 2013 | title: review of Adaptive learning rates and parallelization for stochastic, sparse,
non-smooth gradients
review: This is a followup paper for reference [1] which describes a parameter free adaptive method to set learning rates for SGD. This submission cannot be read without first reading [1]. It expands the work in several directions: the impact of minibatches, the impact of sparsity and gradient orthonormality, and the use of finite difference techniques to approximate curvature. The proposed methods are justified with simple theoretical considerations under simplifying assumptions and with serious empirical studies. I believe that these results are useful.
On the other hand, an opportunity has been lost to write a more substantial self-contained paper. As it stands, the submission reads like three incremental contributions stapled together.
6elK6-b28q62g | Behavior Pattern Recognition using A New Representation Model | [
"Eric qiao",
"Peter A. Beling"
] | We study the use of inverse reinforcement learning (IRL) as a tool for the recognition of agents' behavior on the basis of observation of their sequential decision behavior interacting with the environment. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of the agents in terms of forward planning for the MDP. We use IRL to learn reward functions and then use these reward functions as the basis for clustering or classification models. Experimental studies with GridWorld, a navigation problem, and the secretary problem, an optimal stopping problem, suggest reward vectors found from IRL can be a good basis for behavior pattern recognition problems. Empirical comparisons of our method with several existing IRL algorithms and with direct methods that use feature statistics observed in state-action space suggest it may be superior for behavior recognition problems. | [
"irl",
"agents",
"basis",
"mdp",
"reward functions",
"behavior pattern recognition",
"new representation model",
"use",
"inverse reinforcement learning"
] | https://openreview.net/pdf?id=6elK6-b28q62g | https://openreview.net/forum?id=6elK6-b28q62g | zkxNBUsiN6B38 | review | 1,363,763,280,000 | 6elK6-b28q62g | [
"everyone"
] | [
"Eric qiao"
] | ICLR.cc/2013/conference | 2013 | review: Based on the reviews, a revised version will be updated on arXiv tonight. Thanks. |
6elK6-b28q62g | Behavior Pattern Recognition using A New Representation Model | [
"Eric qiao",
"Peter A. Beling"
] | We study the use of inverse reinforcement learning (IRL) as a tool for the recognition of agents' behavior on the basis of observation of their sequential decision behavior interacting with the environment. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of the agents in terms of forward planning for the MDP. We use IRL to learn reward functions and then use these reward functions as the basis for clustering or classification models. Experimental studies with GridWorld, a navigation problem, and the secretary problem, an optimal stopping problem, suggest reward vectors found from IRL can be a good basis for behavior pattern recognition problems. Empirical comparisons of our method with several existing IRL algorithms and with direct methods that use feature statistics observed in state-action space suggest it may be superior for behavior recognition problems. | [
"irl",
"agents",
"basis",
"mdp",
"reward functions",
"behavior pattern recognition",
"new representation model",
"use",
"inverse reinforcement learning"
] | https://openreview.net/pdf?id=6elK6-b28q62g | https://openreview.net/forum?id=6elK6-b28q62g | KK9P-lgBP7-mW | review | 1,362,703,740,000 | 6elK6-b28q62g | [
"everyone"
] | [
"anonymous reviewer 8f06"
] | ICLR.cc/2013/conference | 2013 | title: review of Behavior Pattern Recognition using A New Representation Model
review: Summary:
The paper presents an approach to activity recognition based on inverse reinforcement learning. It proposes an IRL algorithm based on Gaussian Processes. Evaluation is presented for classification and clustering of behavior in a grid-world problem and the secretary problem.
Comments:
The problem called here 'behavior pattern recognition' is very actively studied currently under the name 'activity recognition', using both unsupervised and supervised methods, some quite sophisticated. See:
http://en.wikipedia.org/wiki/Activity_recognition
and references therein. You should clarify why you need a new term here, if somehow the problem you propose here is different. Based on its definition, it does not seem to be any different.
Moreover, this problem has also been studied in reinforcement learning in the context of learning by demonstration. See the recent work of George Konidaris, eg:
G.D. Konidaris, S.R. Kuindersma, R.A. Grupen and A.G. Barto. Robot Learning from Demonstration by Constructing Skill Trees. The International Journal of Robotics Research 31(3), pages 360-375, March 2012.
Andrew Thomaz, eg:
L. C. Cobo, C.L. Isbell, and A.L. Thomaz. 'Automatic task decomposition and state abstraction from demonstration.' In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2012.
These papers use a significantly more advanced setup, in which trajectories also need to be segmented into activities (which is more realistic). Classification or unsupervised learning could be used on top of their output as well.
The paper should discuss the proposed approach in a broader context.
The experiments presented in the paper are quite simplistic and need to be improved. Specifically, in the grid-world case, the data is generated exactly according to the paradigm for which the algorithm was designed. Also, states are fully observable. What happens if you have more classes? What about partial observability, so that the environment is not really an MDP in the observations? Also, the FT and FE competitor methods are quite simplistic; one would expect that a better way of encoding the trajectory (e.g. using PCA or other forms of dimensionality reduction) would work better.
Note that comparing against the other IRL methods for this task is tricky, because they are designed to recover a reward function that can be then used to train an RL agent, not a reward function which can be used to recognize future behavior. These are different goals. Since many reward functions can generate the same behavior, but some will make different behaviors easier to recognize than others, the paper should emphasize which of the algorithmic choices here are specifically designed to help the recognition.
For the secretary problem, classification results should also be included. The description of the problem is very brief and makes it hard to tell how difficult the problem is (Fig 3a seems to suggest it's not that hard).
Including a more realistic domain, where activities change during a trajectory, would make the paper a lot more convincing.
From a writing point of view, there are many small grammar mistakes, especially in the use of 'the' and 'a', and the paper requires careful proofreading. Also, the experimental description should specify all necessary details, e.g. the values of the hyper-parameters for the GP and how these have been/can be chosen. Running times would also be useful to include, as well as error bars on the graphs.
Pros:
- IRL would be useful to use in this setting, and the proposed approach makes sense
Cons:
- Related references are omitted or not discussed
- Novelty of the proposed approach is low
- The experiments are very limited and simplistic |
6elK6-b28q62g | Behavior Pattern Recognition using A New Representation Model | [
"Eric qiao",
"Peter A. Beling"
] | We study the use of inverse reinforcement learning (IRL) as a tool for the recognition of agents' behavior on the basis of observation of their sequential decision behavior interacting with the environment. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of the agents in terms of forward planning for the MDP. We use IRL to learn reward functions and then use these reward functions as the basis for clustering or classification models. Experimental studies with GridWorld, a navigation problem, and the secretary problem, an optimal stopping problem, suggest reward vectors found from IRL can be a good basis for behavior pattern recognition problems. Empirical comparisons of our method with several existing IRL algorithms and with direct methods that use feature statistics observed in state-action space suggest it may be superior for behavior recognition problems. | [
"irl",
"agents",
"basis",
"mdp",
"reward functions",
"behavior pattern recognition",
"new representation model",
"use",
"inverse reinforcement learning"
] | https://openreview.net/pdf?id=6elK6-b28q62g | https://openreview.net/forum?id=6elK6-b28q62g | N6tX5S-nXZNbo | review | 1,363,762,920,000 | 6elK6-b28q62g | [
"everyone"
] | [
"Eric qiao"
] | ICLR.cc/2013/conference | 2013 | review: To Reviewer 698b.
---------------------
Response: We propose a new problem that aims to categorize decision-makers by learning from samples of their sequential decision-making behavior. The first key to success on this problem is an appropriately designed feature representation constructed from observations of the actions taken by decision makers. We note that there is little systematic research addressing feature representation for this problem; almost all existing work uses heuristically selected measures of the raw behavior data. The novelty of our work is not in the classification/clustering algorithms, but rather in how to automatically learn features that can effectively represent the behavior data. In other words, we propose solving the problem of learning to recognize decision-makers by characterizing behavior with a universal, abstract multi-dimensional feature vector, which does not rely on domain-specific expert knowledge.
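As a schematic of that pipeline (the IRL step is shown as a placeholder; gp_irl, its signature, and the mdp object are hypothetical names for illustration, not our actual implementation):

    import numpy as np
    from sklearn.svm import SVC

    def reward_features(gp_irl, mdp, trajectories_per_agent):
        # gp_irl(mdp, trajectories) -> reward vector with one entry per state;
        # any IRL solver (e.g. a GP-based one) can be plugged in here
        return np.vstack([gp_irl(mdp, trajs) for trajs in trajectories_per_agent])

    def fit_behavior_classifier(gp_irl, mdp, trajectories_per_agent, labels):
        # classify agents in reward space rather than in raw state-action space
        R = reward_features(gp_irl, mdp, trajectories_per_agent)
        return SVC(kernel="rbf").fit(R, labels)

For clustering, the same reward matrix R can simply be fed to k-means or any other clustering algorithm.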
To Reviewer 08b2
-------------------------
We are in agreement a decision strategy is a plan or policy that maps state to action for an agent. We further agree with the reviewer that the testing of agents with different decision strategies is an interesting scenario. Indeed we have performed such experimentation, some of which was reported on in the original version of the paper. In our GridWorld test, we simulate agents using the optimal actions output by the forward planning of an MDP model. We generate two classes of agents. The decision strategies of the two classes of agents, which are generated by the forward planning of MDP models, are different; e.g., one group of agents avoids the boundary of GridWorld, while the other group does not. The reviewer’s suggestion gives us a new idea to produce another scenario in which agents are adopting different decision strategies. In this case, the agents have the same destination goals, but with different uncertainties while making decisions. Therefore, the observed decision strategies will be different. Theoretically, when we model behavior in MDP space, the recovered reward functions will be different and still provide an effective way for categorizing the agents.
“The secretary problem was first tested on three different strategies that achieve the same goal. This is exactly the interesting scenario. Disappointingly, though, these results were not described in detail or shown (last paragraph on page 8 -- I don't see any details about the results of this experiment).”
Response: Yes, we have conducted experiments to categorize three groups of agents with different strategies. The results are 100% accurate; therefore we did not show the accuracy charts here. It is more difficult to categorize two groups of agents whose decision strategies are formed by the same heuristic decision rule but with different parameters. As shown in Figure 3, action-space-based methods have more difficulty solving this problem. For the secretary problem, we first model the problem as an MDP. The reward function is learned from the observed decision trajectories using IRL; it is a vector whose entries are the rewards learned for each state. To visually display the feature vectors, we project the multi-dimensional reward vectors into 2-D space using PCA. The discussion and display of results for these experiments have been expanded in the current version of the paper.
Minor: The conclusions are not well grounded in the current work -- what data make the authors think that this method would be even more superior in real data?
Response: The reviewer’s point that this conclusion outstrips the work is quite valid. We have removed this claim. Our initial enthusiasm was due in part to good results from another research project that uses our proposed method to analyze behavioral data from high frequency trading algorithms in real exchanges.
To Reviewer 8f06
---------------------------
Response: “Activity recognition, or called goal recognition, plan recognition, intent recognition in other fields, aims to recognize the actions and goals of one or more agents from a series of observations on the agents' actions and the environmental conditions.”
Please note that activity recognition aims to recognize the policies or goals; however, our proposed problem aims to categorize the agents by the abstraction of their behavior, not requiring identification of goals. The reward function recovered by IRL algorithms in our problem may not only characterize goal information but also more abstraction of behavior, e.g. the information on decision strategy or more abstract behavior characteristics. Our problem is motivated by some real-world problem that comes from domains like high frequency trading of stocks and commodities, where there is considerable interest in identifying new market players and algorithms based on observations of trading actions, but little hope in learning the precise policies employed by these agents.
The activity recognition problem and our problem also differ fundamentally in the following ways. First, the plan recognition problem, which is formulated on an MDP model, assumes that the reward function of the MDP (or cost function, as they call it) is known, whereas our IRL method infers the reward function, which is the variable to learn. Second, the goal recognition problem assumes that a set of possible goals is known a priori. Given the possible goals, one can model the decision problems using an MDP/POMDP with known reward functions. However, our problem is more general and does not require inference of goals. The reward functions are treated as random variables. The existing plan recognition problem amounts to inferring the goal in a finite, discrete space, while our IRL model indirectly estimates goals in an infinite, continuous space.
“These papers use a significantly more advanced setup, in which trajectories also need to be segmented into activities (which is more realistic). Classification or unsupervised learning could used on top of their output as well. The paper should discuss the proposed approach in a broader context. “
Response: The two papers offer interesting ideas about activity recognition. We were not aware of research with an advanced setup that segments trajectories into activities, but we agree the work is relevant and interesting and have added appropriate citations to the new version of the paper. As we mentioned, our problem is different from learning from demonstration, which aims to recognize the goal or the policy. Given a number of trajectories for an agent, our problem is to automatically extract features that capture the overall behavior pattern of this agent. It is not our purpose to find several sub-goals (or skills/activities, as in those two papers) for one agent. However, these two algorithms give us ideas for further research on an advanced method that considers sub-goals for categorizing the agents.
“Specifically, in the grid-world case, the data is generated exactly according to the paradigm for which the algorithm was designed. Also, states are fully observable. What happens if you have more classes? Partial observability, so the environment is not really an MDP in the observations?”
Response: If the states are partially observable, we can apply POMDPs to solve those problems. The focus of our paper is to demonstrate that the agents' behavior can be better categorized in reward space. We agree that further study of problems with partially observable states would shed more light on the significance of our method.
Our experiments with the secretary problem are perhaps closer to the spirit of separating the regimes of data generation and learning than the original text would have suggested, and we have substantially changed the secretary problem section of the paper to make the correspondences clearer. As a surrogate for the action trajectories of humans, we use simulated agents that generate action trajectories for randomly sampled secretary problems using the cutoff rule (CR), the successive non-candidate counting rule (SNCCR), and the candidate counting rule (CCR). For a given decision rule (CR, SNCCR, CCR), we simulate a group of agents that adopt this rule, differentiating individuals within a group by adding Gaussian noise to the rule's parameter. The details of the process are given in Algorithm 2. We use IRL and observed actions to learn reward functions for the MDP model given in Algorithm 2.
It is critical to understand that the state space for this MDP model captures nothing of the history of candidates, and as a consequence is wholly inadequate for the purposes of modeling SNCCR and CCR. In other words, for general parameters, neither SNCCR nor CCR can be expressed as a policy for the MDP in Algorithm 2. (There does exist an MDP in which all three of the decision rules can be expressed as policies, but the state space for this model is exponentially larger.) Hence, for two of the rules, the processes that we use to generate data and the processes we use to learn are distinct.
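For readers unfamiliar with the cutoff rule mentioned above: it observes the first r-1 applicants without accepting anyone, then accepts the first later applicant who beats everything seen so far. A minimal, self-contained sketch (an illustration, not our Algorithm 2):

    import numpy as np

    def cutoff_rule(scores, r):
        # observe the first r-1 applicants, then take the first one better than all so far
        best_seen = max(scores[:r - 1], default=float("-inf"))
        for t in range(r - 1, len(scores)):
            if scores[t] > best_seen:
                return t                  # position of the accepted applicant
        return len(scores) - 1            # forced to take the last applicant

    rng = np.random.default_rng(0)
    n = 40
    scores = list(rng.permutation(n))     # a random ranking of applicants
    r = int(np.ceil(n / np.e))            # the classical ~n/e cutoff
    t = cutoff_rule(scores, r)
    print(t, scores[t] == n - 1)          # did we pick the best applicant?

Adding noise to the cutoff parameter r, as described above, is what differentiates individual simulated agents within the CR group.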
“Also, the FT and FE competitor methods are quite simplistic, one would expect that better way of encoding the trajectory (e.g using PCA or other forms of dimensionality reduction) would work better.”
Response: The standard and widely used feature extraction scheme calculates statistical metrics directly on the raw observation data. This relies on application-specific expert knowledge, and there may be a variety of methods to quantize the observations. We would like to see whether the standard algorithm would work better after applying advanced feature extraction tools, such as PCA, on top of the FT and FE feature vectors.
“Since many reward functions can generate the same behavior, but some will make different behaviors easier to recognize than others, the paper should emphasize which of the algorithmic choices here are specifically designed to help the recognition. “
Response: The core of this paper is to propose a universal feature representation framework that can effectively characterize the agents’ behavior, instead of studying which IRL algorithm is the best choice in this framework. A major contribution is that our feature representation is automatic and does not require domain-specific expert knowledge for selecting the feature metrics. We aim to prove that the reward feature space recovered by IRL algorithms is superior to the standard methods that manually construct the statistical features on the raw observation data.
However, other papers (see, e.g., Q. Qiao and P. Beling. Inverse reinforcement learning via Gaussian process. ACC, 2011) find that the IRL algorithm with GP also excels in the training of an apprentice RL agent. This means that some IRL algorithms may perform better in both the recognition problem and the apprenticeship learning problem. In turn, this may provide evidence that the reward features can better characterize decision-making behaviors. There may be some link between the problems of recognizing behavior patterns and replicating behavior policies.
“value of hyper-parameters for the GP and describe how these have been/can be chosen.”
Response: The algorithm on page 5 mentioned how to optimize the hyper-parameters for the GP. |
6elK6-b28q62g | Behavior Pattern Recognition using A New Representation Model | [
"Eric qiao",
"Peter A. Beling"
] | We study the use of inverse reinforcement learning (IRL) as a tool for the recognition of agents' behavior on the basis of observation of their sequential decision behavior interacting with the environment. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of the agents in terms of forward planning for the MDP. We use IRL to learn reward functions and then use these reward functions as the basis for clustering or classification models. Experimental studies with GridWorld, a navigation problem, and the secretary problem, an optimal stopping problem, suggest reward vectors found from IRL can be a good basis for behavior pattern recognition problems. Empirical comparisons of our method with several existing IRL algorithms and with direct methods that use feature statistics observed in state-action space suggest it may be superior for behavior recognition problems. | [
"irl",
"agents",
"basis",
"mdp",
"reward functions",
"behavior pattern recognition",
"new representation model",
"use",
"inverse reinforcement learning"
] | https://openreview.net/pdf?id=6elK6-b28q62g | https://openreview.net/forum?id=6elK6-b28q62g | PPs3ZO_pnzZTb | review | 1,362,473,880,000 | 6elK6-b28q62g | [
"everyone"
] | [
"anonymous reviewer 08b2"
] | ICLR.cc/2013/conference | 2013 | title: review of Behavior Pattern Recognition using A New Representation Model
review: This paper proposes a behavior pattern recognition framework that re-represents the problem of classifying behavior trajectories as a problem of classifying reward functions instead. Since the reward function of the agent that is classified is not known, it is inferred using inverse reinforcement learning (IRL). Comparison of the proposed method to standard trajectory classification methods shows that the former performs much better than the latter on a grid task and to a lesser extent (but still better) on an optimal stopping problem (the secretary problem).
The novelty here is not in the classification or IRL algorithms, but rather in the idea that it is better to classify reward functions than observed state-action sequences. The real test case for this proposal, I think, is a case in which agents differ in their behavioral strategy, but not in the (real) reward function which they were working for. It might be that in that case the proposed method would still excel as the inferred reward function would be different for those with different strategies -- and this would be a very nice demonstration. For instance, if I prefer to go to the goal using the scenic route, and someone else takes the route with less traffic (but a lower speed limit), we might both reach the destination at the same time, thus maximizing some external reward function correctly, but IRL might infer that I assign reward to scenery and the other person to not having to compete with other cars on the road.
Unfortunately, such a scenario was not tested in the paper. The gridworld task involved two classes of agents that differed only in their (true) reward function, not their strategies. In that case it seems obvious that classifying based on reward functions would be a good idea (it was still nice to see that the proposed method does very well even with very short trajectories --- I am not saying there was no merit to the experiments shown, just that this was not the strongest test case for the proposed framework).
The secretary problem was first tested on three different strategies that achieve the same goal. This is exactly the interesting scenario. Disappointingly, though, these results were not described in detail or shown (last paragraph on page 8 -- I don't see any details about the results of this experiment). Instead, the authors show results for a different experiment in which all agents had the same strategy but differed in the cutoff rule (which is akin to a reward function), as well as an experiment comparing a heuristic strategy to a random one. In both cases these are not the interesting test cases. (As an aside, I also found Figure 3 which describes these results unclear: how was reward defined for these simulations? what are the axes in the different subplots?)
Minor: The conclusions are not well grounded in the current work -- what data make the authors think that this method would be even more superior in real data? |
6elK6-b28q62g | Behavior Pattern Recognition using A New Representation Model | [
"Eric qiao",
"Peter A. Beling"
] | We study the use of inverse reinforcement learning (IRL) as a tool for the recognition of agents' behavior on the basis of observation of their sequential decision behavior interacting with the environment. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of the agents in terms of forward planning for the MDP. We use IRL to learn reward functions and then use these reward functions as the basis for clustering or classification models. Experimental studies with GridWorld, a navigation problem, and the secretary problem, an optimal stopping problem, suggest reward vectors found from IRL can be a good basis for behavior pattern recognition problems. Empirical comparisons of our method with several existing IRL algorithms and with direct methods that use feature statistics observed in state-action space suggest it may be superior for behavior recognition problems. | [
"irl",
"agents",
"basis",
"mdp",
"reward functions",
"behavior pattern recognition",
"new representation model",
"use",
"inverse reinforcement learning"
] | https://openreview.net/pdf?id=6elK6-b28q62g | https://openreview.net/forum?id=6elK6-b28q62g | kA2a1ywTaHAT3 | review | 1,362,418,320,000 | 6elK6-b28q62g | [
"everyone"
] | [
"anonymous reviewer 698b"
] | ICLR.cc/2013/conference | 2013 | title: review of Behavior Pattern Recognition using A New Representation Model
review: I am not a huge expert in reinforcement learning but nonetheless I have to say this paper is quite confusing to me. I had a hard time understanding the point. Moreover, I think the topic of this paper has nothing to do whatsoever with the interests of this conference, namely representation learning, so I suggest the authors resubmit this work elsewhere.
cons:
- not clearly written
- not relevant to this conference |
V_-8VUqv8h_H3 | The Manifold of Human Emotions | [
"Seungyeon Kim",
"Fuxin Li",
"Guy Lebanon",
"Irfan Essa"
] | Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather than a finite set of human emotions. We investigate the resulting model, compare it to psychological observations, and explore its predictive capabilities. | [
"human emotions",
"manifold",
"model",
"presence",
"positive",
"negative emotions",
"text document",
"higher dimensional extensions",
"sentiment concept"
] | https://openreview.net/pdf?id=V_-8VUqv8h_H3 | https://openreview.net/forum?id=V_-8VUqv8h_H3 | DsMNDQOdK3o4y | comment | 1,362,951,000,000 | ADj5N2hoX0_ox | [
"everyone"
] | [
"Seungyeon Kim, Fuxin Li, Guy Lebanon, Irfan Essa"
] | ICLR.cc/2013/conference | 2013 | reply: 1. P(Y|Z) can be computed using Bayes rule on P(Z|Y). We had to remove lots of details due to the space limits. Detailed implementation is on our full paper on ArXiv (http://arxiv.org/abs/1202.1568).
2. A lot of references and comparisons are omitted because of the space limits, but we will try to include suggested references and discussions. |
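To spell out point 1 in symbols (our notation; a prior $P(Y)$ over the emotion labels, e.g. uniform or empirical, is assumed):

    $P(Y = y \mid Z = z) = \frac{P(Z = z \mid Y = y)\, P(Y = y)}{\sum_{y'} P(Z = z \mid Y = y')\, P(Y = y')}$

If the class-conditionals $P(Z \mid Y = y)$ are Gaussian, this posterior reduces to a softmax over the Gaussian log-densities plus log-priors.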
V_-8VUqv8h_H3 | The Manifold of Human Emotions | [
"Seungyeon Kim",
"Fuxin Li",
"Guy Lebanon",
"Irfan Essa"
] | Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather than a finite set of human emotions. We investigate the resulting model, compare it to psychological observations, and explore its predictive capabilities. | [
"human emotions",
"manifold",
"model",
"presence",
"positive",
"negative emotions",
"text document",
"higher dimensional extensions",
"sentiment concept"
] | https://openreview.net/pdf?id=V_-8VUqv8h_H3 | https://openreview.net/forum?id=V_-8VUqv8h_H3 | C4MuPqjpEwP7S | review | 1,362,239,340,000 | V_-8VUqv8h_H3 | [
"everyone"
] | [
"anonymous reviewer e0d0"
] | ICLR.cc/2013/conference | 2013 | title: review of The Manifold of Human Emotions
review: This paper proposes a new method for sentiment analysis of text
documents based on two phases: first, learning a continuous vector
representation of the document (a projection on the mood manifold) and
second, learning to map from this representation to the sentiment
classes. The assumption behind this model is that such an intermediate
smooth representation might help the classification, especially in the case where the number of sentiment classes is rather large (32) as it is studied here.
The idea of modeling the relationship between emotions labels (Y) and
documents (X, encoded using bag-of-words) via an intermediate
representation (Z) is appealing and seems to be a good direction to pursue.
The main idea of the present model is to build a kind of two-layer
network (X->Z->Y), where each layer has its own architecture and learning
process and is trained in a (weakly) supervised way. Unfortunately, it is not
exactly clear how this training works. On one hand, the layer X->Z is trained via maximum likelihood, setting the supervision on Z via least-square regression for Z->Y (X and Y are known). But on the other hand, it is written that the layer Z->Y is obtained via MDS or kernel PCA. This is a bit puzzling.
I also think that the dimension of the manifold (l) should be given.
There is a lack of references (maybe due to the page limit). Still,
(Glorot et al., ICML11), (Chen et al., ICML12) or (Socher et al.,
EMNLP11) should be discussed, since all these papers present neural
network architectures for sentiment analysis and basically learn an
intermediate representation of documents.
Pros:
- interesting setting with weak supervision
- new data
Cons:
- many unclear points
- lack references |
V_-8VUqv8h_H3 | The Manifold of Human Emotions | [
"Seungyeon Kim",
"Fuxin Li",
"Guy Lebanon",
"Irfan Essa"
] | Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather than a finite set of human emotions. We investigate the resulting model, compare it to psychological observations, and explore its predictive capabilities. | [
"human emotions",
"manifold",
"model",
"presence",
"positive",
"negative emotions",
"text document",
"higher dimensional extensions",
"sentiment concept"
] | https://openreview.net/pdf?id=V_-8VUqv8h_H3 | https://openreview.net/forum?id=V_-8VUqv8h_H3 | zzCNIJyUdvSfw | comment | 1,362,951,060,000 | C4MuPqjpEwP7S | [
"everyone"
] | [
"Seungyeon Kim, Fuxin Li, Guy Lebanon, Irfan Essa"
] | ICLR.cc/2013/conference | 2013 | reply: 1. It is more related to latent variable models than neural network as it doesn’t have any activation function between layers. Moreover, neural network is learned by back-propagation algorithms, but our model is learnt using maximum likelihood with marginalizing latent variable Z. Linear regression part is result of Dirac’s delta approximation. Detailed implementation is on our full paper on ArXiv (http://arxiv.org/abs/1202.1568).
2. The dimension of the manifold in the experiments was 31, since we use MDS on the centroids of the 32 classes.
3. A lot of references are omitted because of the space limits (3 pages!), but we will try to include a few key references. |
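A rough, hypothetical sketch of the two-stage construction (all names are illustrative; the actual estimator marginalizes Z by maximum likelihood rather than using the plain least-squares stand-in shown here, and X is assumed to be a dense bag-of-words matrix):

    import numpy as np
    from sklearn.manifold import MDS
    from sklearn.linear_model import Ridge

    def fit_mood_manifold(X, Y, n_classes, dim):
        # X: bag-of-words documents (n_docs x vocab), Y: integer emotion labels
        centroids = np.vstack([X[Y == c].mean(axis=0) for c in range(n_classes)])
        Z_classes = MDS(n_components=dim, random_state=0).fit_transform(centroids)
        Z = Z_classes[Y]                       # target manifold coordinate per document
        embed = Ridge(alpha=1.0).fit(X, Z)     # X -> Z, least-squares stand-in
        return embed, Z_classes

    def predict_emotion(embed, Z_classes, X_new):
        Z_new = embed.predict(X_new)
        d = ((Z_new[:, None, :] - Z_classes[None, :, :]) ** 2).sum(-1)
        return d.argmin(axis=1)                # nearest class centroid on the manifold

With 32 emotion classes, MDS on the class centroids yields at most a 31-dimensional manifold, which matches the dimensionality quoted in point 2 above.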
V_-8VUqv8h_H3 | The Manifold of Human Emotions | [
"Seungyeon Kim",
"Fuxin Li",
"Guy Lebanon",
"Irfan Essa"
] | Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather than a finite set of human emotions. We investigate the resulting model, compare it to psychological observations, and explore its predictive capabilities. | [
"human emotions",
"manifold",
"model",
"presence",
"positive",
"negative emotions",
"text document",
"higher dimensional extensions",
"sentiment concept"
] | https://openreview.net/pdf?id=V_-8VUqv8h_H3 | https://openreview.net/forum?id=V_-8VUqv8h_H3 | ADj5N2hoX0_ox | review | 1,362,105,540,000 | V_-8VUqv8h_H3 | [
"everyone"
] | [
"anonymous reviewer 9992"
] | ICLR.cc/2013/conference | 2013 | title: review of The Manifold of Human Emotions
review: This paper introduces a model for sentiment analysis aimed at capturing blended, non-binary notions of sentiment. The paper uses a novel dataset of >1 million blog posts (livejournal) using 32 emoticons as labels. The model uses a Gaussian latent variable to embed bag of words documents into a vector space shaped by the emoticon labels. Experiments on the blog dataset demonstrate the latent vector representations of documents are better than bag of words for multiclass sentiment classification.
I'm a bit confused as to how the inference procedure works. The conditional distribution P(Z|X) is Gaussian as is P(Z|Y), but the graphical structure suggests P(Y|Z) needs to be given and a Gaussian doesn't make sense here as Y is a binary vector. More generally, given the description in the paper I don't understand how to implement the proposed model. A more detailed description of the model itself and the inference procedure could help here.
There is a lot of recent work on representation learning for sentiment. No discussion of related models or comparison to other work is given. In particular, recursive neural networks (e.g. Socher et al EMNLP 2011) have been used to learn document vector representations for multi-dimensional sentiment. Additionally, Maas et al (ACL 2011) introduce a benchmark dataset and a similar latent-variable graphical model for sentiment representations. Overall, I think substantially more discussion of previous work is necessary. The two citations given aren't reflective of much of the recent work on learning text representations or sentiment.
To summarize:
- The proposed dataset sounds interesting and could advance representation learning for multi-dimensional sentiment analysis
- The proposed model is very unclear. With the current explanation I am unable to verify its correctness
- Practically no discussion of the large amount of previous work on learning representations for text and sentiment |
KHMdKiX2lbguE | Boltzmann Machines and Denoising Autoencoders for Image Denoising | [
"KyungHyun Cho"
] | Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the field of deep learning, called Boltzmann machines, can perform image denoising as well as, or in certain cases of high level of noise, better than denoising autoencoders. We empirically evaluate the two models on three different sets of images with different types and levels of noise. Throughout the experiments we also examine the effect of the depth of the models. The experiments confirmed our claim and revealed that the performance can be improved by adding more hidden layers, especially when the level of noise is high. | [
"boltzmann machines",
"autoencoders",
"image",
"models",
"noise",
"experiments",
"probabilistic model",
"local image patches",
"various researchers",
"deep"
] | https://openreview.net/pdf?id=KHMdKiX2lbguE | https://openreview.net/forum?id=KHMdKiX2lbguE | PLgu8d4J3rRz9 | review | 1,362,189,720,000 | KHMdKiX2lbguE | [
"everyone"
] | [
"anonymous reviewer bf00"
] | ICLR.cc/2013/conference | 2013 | title: review of Boltzmann Machines and Denoising Autoencoders for Image Denoising
review: This paper is an empirical comparison of different models (Boltzmann Machines and Denoising Autoencoders) on the task of image denoising. Based on the experiments, the authors claim that increasing model depth improves denoising performance when the level of noise is high.
PROS
+ Exploring DBMs for images denosing is indeed interesting and important.
CONS
- There is little novelty in this paper.
- The experiments could not be easily reproduced, since some important details of the experimental setting are not provided (see below).
- The proposed models were not compared with any state-of-the-art denoising method.
Detailed comments
Page 4: The authors should explicitly specify how they constructed the matrix D.
Page 4: There could be some mistakes in equation 5. Please list the detailed derivation.
Page 4: Equation 5 is not a standard routine. You should first make an assumption about the noise, for example $\tilde{v}=v+n,\; n\sim\mathcal{N}(\mu,\sigma^2)$. Then $P(v|\tilde{v})=\frac{P(\tilde{v}|v)P(v)}{P(\tilde{v})}\propto P(\tilde{v}|v)P(v)$.
Page 6: Some high-resolution natural image data sets (ImageNet or the Berkeley Segmentation Benchmark) could be more appropriate than CIFAR-10 for this denoising task.
Page 6: The authors should describe in detail how the training set is constructed. |
KHMdKiX2lbguE | Boltzmann Machines and Denoising Autoencoders for Image Denoising | [
"KyungHyun Cho"
] | Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the field of deep learning, called Boltzmann machines, can perform image denoising as well as, or in certain cases of high level of noise, better than denoising autoencoders. We empirically evaluate the two models on three different sets of images with different types and levels of noise. Throughout the experiments we also examine the effect of the depth of the models. The experiments confirmed our claim and revealed that the performance can be improved by adding more hidden layers, especially when the level of noise is high. | [
"boltzmann machines",
"autoencoders",
"image",
"models",
"noise",
"experiments",
"probabilistic model",
"local image patches",
"various researchers",
"deep"
] | https://openreview.net/pdf?id=KHMdKiX2lbguE | https://openreview.net/forum?id=KHMdKiX2lbguE | VC6Ay131A-y1w | review | 1,362,494,700,000 | KHMdKiX2lbguE | [
"everyone"
] | [
"Kyunghyun Cho"
] | ICLR.cc/2013/conference | 2013 | review: Dear reviewer (d5d4),
Thank you for your thorough review and comments.
- 'the paper fails to compare against robust Boltzmann machines (Tang et al.,
CVPR 2012)'
Thanks for pointing it out, and I agree that the RoBM should be tried as well. It
will be possible to use the already trained GRBMs to initialize an RoBM to
see how much improvement the RoBM can bring.
- 'More thorough analysis and better training might ... make the conclusion
more convincing.'
One of the main claims in this paper was to show that a family of Boltzmann
machines is a potential alternative to denoising autoencoders which have
recently been proposed and shown to excel in image denoising. Also, another
was that it is possible to perform *blind* image denoising where no prior
information on noise types and levels was available at the training time. For
this, I have only conducted a limited set of experiments that barely
confirms these claims.
I fully agree that follow-up research/experiments will reveal more insights
into the effect of model structures, training procedures and the choice of
training sets on the performance of image denoising.
- 'How did you tune the hyperparameters?'
This was one question to which I was not able to find a clear answer. Since
the task I considered was completely *blind*, meaning that not even the types of test
images were known, I had to resort to using the reconstruction error on
the validation image patches, which, I believe, is not a good indicator of
the generalization performance in this case.
I agree that more investigation is definitely required in this matter of
validation in image denoising.
- 'whether the authors faithfully implemented Xie et al.’s method'
The training procedure used in this paper is slightly different from the one
used by Xie et al. The procedure is also different from how Burger et al.
trained denoising autoencoders. Comparison to their trained models (for
instance, Burger et al. made their learned model parameters available online)
will be one of the potential next steps in this research. |
KHMdKiX2lbguE | Boltzmann Machines and Denoising Autoencoders for Image Denoising | [
"KyungHyun Cho"
] | Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the field of deep learning, called Boltzmann machines, can perform image denoising as well as, or in certain cases of high level of noise, better than denoising autoencoders. We empirically evaluate the two models on three different sets of images with different types and levels of noise. Throughout the experiments we also examine the effect of the depth of the models. The experiments confirmed our claim and revealed that the performance can be improved by adding more hidden layers, especially when the level of noise is high. | [
"boltzmann machines",
"autoencoders",
"image",
"models",
"noise",
"experiments",
"probabilistic model",
"local image patches",
"various researchers",
"deep"
] | https://openreview.net/pdf?id=KHMdKiX2lbguE | https://openreview.net/forum?id=KHMdKiX2lbguE | ppSEYjkaMGYj5 | review | 1,362,411,780,000 | KHMdKiX2lbguE | [
"everyone"
] | [
"Kyunghyun Cho"
] | ICLR.cc/2013/conference | 2013 | review: Dear reviewers (bf00) and (9120),
First of all, thank you for your thorough reviews.
Please, find my response to your comments below. A revision
of the paper that includes the fixes made accordingly will
be available at the arXiv.org tomorrow (Tue, 5 Mar 2013
01:00:00 GMT).
To both reviewers (bf00) and (9120):
Thank you for pointing out the mistakes in some of the
equations. As both of you noticed, there was a problem in
Eq. (4). There should be as many binary matrices $D_n$ as
there are image patches from each test image. This mistake
happened as I was trying to put the procedure into a more
compact mathematical equation. There was no mistake in the
implementation. I have fixed the equation and its
accompanying text description accordingly.
Also, in Eq. (5), the term inside the last expectation
should be p(v | h) instead of p(\tilde{v} | h). Thank you
again for pointing that out.
To reviewer (bf00):
- 'The experiments could be not easily reproduced'
I have added the detailed configurations used for
training each model as an appendix.
- 'models were not compared with any state-of-the-art
denoising method'
The aim of the paper was to propose an alternative deep
neural network model that might be used in place of
denoising autoencoders which were recently proposed to
excel in the task of image denoising. However, I agree
that the comparison with other approaches would make the
paper more interesting.
- 'should describe how to construct training set in detail'
I have added how the training set was constructed.
- 'high-resolution natural image data sets could be more
proper'
I fully agreed with you and thank you for the suggestion.
I have run the same set of experiment using the training
set constructed from the Berkeley Segmentation Benchmark
(BSD-500). The results closely resemble those presented
already in the paper, and the overall trend did not
change. I have appended the new figure (same format as
Fig. 2) obtained using the new training set in the
appendix.
To reviewer (9120):
- 'Proper layer-sizes cross validation should be performed'
I fully agree with you. The most important thing to
be checked, in my opinion, is the performance of
single-layer models having the same number of hidden
units as multi-layer models (e.g., GRBM with 640 and 1280
hidden units trained on 8x8 patches). I will run the
experiment, and if time permits, will add the results in
the paper.
- 'In eq of hat{v}_i'
Thank you for pointing it out. I have mistakenly put
p(h|\tilde{v}), implicitly assuming the case of an RBM with
binary hidden units, where p(h|\tilde{v}) coincides with
E[h|\tilde{v}]. However, for a general BM, you are
correct and I have fixed it accordingly. |
KHMdKiX2lbguE | Boltzmann Machines and Denoising Autoencoders for Image Denoising | [
"KyungHyun Cho"
] | Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the field of deep learning, called Boltzmann machines, can perform image denoising as well as, or in certain cases of high level of noise, better than denoising autoencoders. We empirically evaluate the two models on three different sets of images with different types and levels of noise. Throughout the experiments we also examine the effect of the depth of the models. The experiments confirmed our claim and revealed that the performance can be improved by adding more hidden layers, especially when the level of noise is high. | [
"boltzmann machines",
"autoencoders",
"image",
"models",
"noise",
"experiments",
"probabilistic model",
"local image patches",
"various researchers",
"deep"
] | https://openreview.net/pdf?id=KHMdKiX2lbguE | https://openreview.net/forum?id=KHMdKiX2lbguE | CIGoQSPKoZIKs | review | 1,362,486,600,000 | KHMdKiX2lbguE | [
"everyone"
] | [
"anonymous reviewer d5d4"
] | ICLR.cc/2013/conference | 2013 | title: review of Boltzmann Machines and Denoising Autoencoders for Image Denoising
review: A brief summary of the paper's contributions, in the context of prior work.
The paper proposed to use Gaussian deep Boltzmann machines (GDBM) for image denoising tasks, and it empirically compared the denoising performance to another state-of-the-art method based on stacked denoising autoencoders (Xie et al.). From empirical evaluations, the author confirms that deep learning models (DBM or DAE) achieve good performance in image denoising. Although DAE performs better than GDBM in many cases, GDBM can still be useful for image denoising since it doesn’t require prior knowledge of the types or levels of noise.
An assessment of novelty and quality.
The main contribution of the paper is the use of Gaussian DBM for denoising. It also provides a comparison against existing models (stacked denoising autoencoders). Although the technical novelty is limited, it is still interesting that a GRBM without knowledge of the specific noise (in the target tasks) can perform well for image denoising.
One major problem is that the paper fails to compare against a closely related work on robust Boltzmann machines (Tang et al., CVPR 2012), which is specifically designed for denoising tasks.
Conclusions drawn from empirical evaluation seem fairly reasonable, but not very surprising. Also, the results look somewhat random. More thorough analysis and better training might clean up the results and make the conclusion more convincing.
Other comments:
How did you tune the hyperparameters (l2 regularization, learning rate, number of hidden nodes, etc.) of the model? The trained model is sensitive to these hyperparameters, so it should have been tuned to some validation task.
A list of pros and cons (reasons to accept/reject).
pros:
- Empirical evaluation of two deep models on image denoising tasks seems to confirm the usefulness of deep learning methods for image denoising.
- It’s very interesting that models trained from natural images (CIFAR-10) work well for unrelated images.
cons:
- The main contribution of the paper is the use of GRBM/DBM for denoising. However, it’s not clear whether GRBM/DBMs are better than DAE(4).
- There is no comparison against robust Boltzmann machines (Tang et al., CVPR 2012).
- It would have been nice to make the results comparable to other published work (e.g., Xie et al.). The results in the paper raise questions about whether the authors faithfully implemented Xie et al.’s method. |
KHMdKiX2lbguE | Boltzmann Machines and Denoising Autoencoders for Image Denoising | [
"KyungHyun Cho"
] | Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the field of deep learning, called Boltzmann machines, can perform image denoising as well as, or in certain cases of high level of noise, better than denoising autoencoders. We empirically evaluate the two models on three different sets of images with different types and levels of noise. Throughout the experiments we also examine the effect of the depth of the models. The experiments confirmed our claim and revealed that the performance can be improved by adding more hidden layers, especially when the level of noise is high. | [
"boltzmann machines",
"autoencoders",
"image",
"models",
"noise",
"experiments",
"probabilistic model",
"local image patches",
"various researchers",
"deep"
] | https://openreview.net/pdf?id=KHMdKiX2lbguE | https://openreview.net/forum?id=KHMdKiX2lbguE | tO_8tX3y-7SXz | review | 1,362,361,020,000 | KHMdKiX2lbguE | [
"everyone"
] | [
"anonymous reviewer 9120"
] | ICLR.cc/2013/conference | 2013 | title: review of Boltzmann Machines and Denoising Autoencoders for Image Denoising
review: The paper conducts an empirical performance comparison, on the task of image denoising, where the denoising of large images is based on combining the denoising of small patches. In this context, the study compares using, as small-patch denoisers, deep denoising autoencoders (DAE) versus deep Boltzmann machines with a Gaussian visible layer (GDBM, which corresponds to a GRBM for a single hidden layer). Compared to recent work on deep DAEs for image denoising, shown to be competitive with state-of-the-art methods (Burger et al. CVPR'2012; Xie et al. NIPS'2012), this work rather considers *blind* denoising tasks (test noise kind and level not the same as those used during training). For the DBM part, the work builds on the author's GDBM (presented at the NIPS 2011 workshop on deep learning), and performs denoising as the expectation of the visibles given the inferred expected first-layer hidden units, obtained through a variational approximation.
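(In symbols — my paraphrase of the procedure, not an equation taken from the paper, and up to the exact variance parametrization of the Gaussian visible layer — the denoised patch is $\hat{v} = E[v \mid h = E[h \mid \tilde{v}]]$, which for a single-layer GRBM with unit visible variance reduces to $\hat{v} = b + W\,E[h \mid \tilde{v}]$; in the deep case the inner expectation is replaced by its mean-field/variational estimate.)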
The paper essentially draws the following observations
a) GRBM / GDBM can be equally successful at image denoising as deep DAEs,
b) increased depth seems to help denoising, particularly at higher noise levels.
c) interestingly, a GRBM (single layer) often appears competitive compared to a GDBM with more layers (while deeper DAEs more systematically improve over a single-layer DAE).
Pros:
+ I find it is a worthy empirical comparison study to make.
+ it reasonably supports observation a), which is not too surprising (also there's no clear winner).
+ the observation I find most interesting, and worthy of further *digging*, is c), as it could be, as suggested by the authors, a concrete effect of the limitations of the variational approximation in the GDBM.
Cons:
- empirical performance comparison of similar models, but it does not yield much insight into where the differences may arise (no sensitivity analysis other than final denoising performance)
- while I would a priori be inclined to believe b), I find the methodology lacking here. It seems a single fixed hidden layer size has been considered, the same for all layers, so that deeper networks necessarily had more parameters. Proper cross-validation of layer sizes should be performed before we can hope to draw a scientific conclusion with respect to the benefit of depth.
- mathematical notation is often a little sloppy or buggy:
Eq 4: if D is n x d as claimed, Dx will be n x 1, so it cannot correspond to n 'patches' as claimed (unless your patches are but 1 pixel).
Eq 5: I believe the last p(\tilde{v}|h) should be p(v|h).
Next eq: p(v | h=\mu) is an abuse of notation since the h are binary.
In the eq for \hat{v}_i: p(h|\tilde{v}) is problematic, since there is no bound value for h. Shouldn't it rather be E[h|\tilde{v}]? |
G0OapcfeK3g_R | Block Coordinate Descent for Sparse NMF | [
"Vamsi Potluru",
"Sergey M. Plis",
"Jonathan Le Roux",
"Barak A. Pearlmutter",
"Vince D. Calhoun",
"Thomas P. Hayes"
] | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow and other formulations for sparse NMF have been proposed such as those based on L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets. | [
"sparsity",
"norm",
"datasets",
"block coordinate descent",
"nmf",
"ubiquitous tool",
"data analysis"
] | https://openreview.net/pdf?id=G0OapcfeK3g_R | https://openreview.net/forum?id=G0OapcfeK3g_R | WYMDnhGXd0L_5 | review | 1,363,287,900,000 | G0OapcfeK3g_R | [
"everyone"
] | [
"Vamsi Potluru"
] | ICLR.cc/2013/conference | 2013 | review: Thanks to all the reviewers for their detailed and insightful comments and suggestions.
We are working on incorporating most of them into our paper and should have the updated version this weekend. |
G0OapcfeK3g_R | Block Coordinate Descent for Sparse NMF | [
"Vamsi Potluru",
"Sergey M. Plis",
"Jonathan Le Roux",
"Barak A. Pearlmutter",
"Vince D. Calhoun",
"Thomas P. Hayes"
] | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow and other formulations for sparse NMF have been proposed such as those based on L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets. | [
"sparsity",
"norm",
"datasets",
"block coordinate descent",
"nmf",
"ubiquitous tool",
"data analysis"
] | https://openreview.net/pdf?id=G0OapcfeK3g_R | https://openreview.net/forum?id=G0OapcfeK3g_R | 9pQNdTOGrb9Pw | review | 1,360,229,520,000 | G0OapcfeK3g_R | [
"everyone"
] | [
"Paul Shearer"
] | ICLR.cc/2013/conference | 2013 | review: The main convergence result in the paper, Theorem 3, does not prove what it purports to prove. Specifically the proof of Theorem 3 refers to a completely different optimization problem than the one the authors claim to be solving on page 5 and throughout the paper.
In the proof the authors replace the nonconvex constraint ||W_j||_2 = 1 on page 5 with the convex relaxation ||W_j||_2 <= 1. This relaxation appears to be standard, but it actually allows W_j to become arbitrarily nonsparse, for one may decrease the L2 norm of a given W_j (while keeping ||W_j||_1 = k) simply by averaging W_j with a constant vector. Allowing arbitrary nonsparsity defeats the point of the proposed model, which is to maintain the sparsity of the W_j.
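(A concrete illustration, with my own numbers rather than anything from the paper: in dimension 3, W_j = (1, 0, 0) has ||W_j||_1 = ||W_j||_2 = 1 and maximal L1/L2 sparsity; replacing it by the constant vector (1/3, 1/3, 1/3) keeps ||W_j||_1 = 1 but lowers ||W_j||_2 to 1/sqrt(3) ≈ 0.577 <= 1, so the relaxed constraint is still satisfied while the sparsity measure drops to its minimum value of 0.)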
To keep the L1/L2 ratio bounded and thus maintain sparsity, the inequality should go in the other direction: ||W_j||_2 >= 1. But this is a nonconvex set so Nesterov's theorems do not apply. Theorem 3 for this problem must be proven by a different route (see for example Attouch 2011, http://www.optimization-online.org/DB_FILE/2010/12/2864.pdf), or one could forget proof and just say the algorithm seems to work fine empirically. |
G0OapcfeK3g_R | Block Coordinate Descent for Sparse NMF | [
"Vamsi Potluru",
"Sergey M. Plis",
"Jonathan Le Roux",
"Barak A. Pearlmutter",
"Vince D. Calhoun",
"Thomas P. Hayes"
] | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow and other formulations for sparse NMF have been proposed such as those based on L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets. | [
"sparsity",
"norm",
"datasets",
"block coordinate descent",
"nmf",
"ubiquitous tool",
"data analysis"
] | https://openreview.net/pdf?id=G0OapcfeK3g_R | https://openreview.net/forum?id=G0OapcfeK3g_R | QOxbO7qFg2Och | comment | 1,364,235,300,000 | YlFHNQiVHDYVP | [
"everyone"
] | [
"Vamsi Potluru"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks again for your detailed comments. We will incorporate them into our paper. |
G0OapcfeK3g_R | Block Coordinate Descent for Sparse NMF | [
"Vamsi Potluru",
"Sergey M. Plis",
"Jonathan Le Roux",
"Barak A. Pearlmutter",
"Vince D. Calhoun",
"Thomas P. Hayes"
] | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow and other formulations for sparse NMF have been proposed such as those based on L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets. | [
"sparsity",
"norm",
"datasets",
"block coordinate descent",
"nmf",
"ubiquitous tool",
"data analysis"
] | https://openreview.net/pdf?id=G0OapcfeK3g_R | https://openreview.net/forum?id=G0OapcfeK3g_R | gWF1WlYIRPpoT | review | 1,362,215,700,000 | G0OapcfeK3g_R | [
"everyone"
] | [
"anonymous reviewer 1d08"
] | ICLR.cc/2013/conference | 2013 | title: review of Block Coordinate Descent for Sparse NMF
review: Summary:
The paper presents a new optimization algorithm for solving NMF problems with the Euclidean norm as the fitting cost, subject to sparsity constraints. The sparsity is imposed explicitly by adding an equality constraint to the optimization problem, requiring the sparsity measure proposed in [10] (referred to as the L1/L2 measure) of the columns of the matrix factors to be equal to a pre-defined constant. The contribution of this paper is to propose a more efficient optimization procedure for this problem. This is obtained mainly through two variations on the original method introduced in [10]: (i) a block-coordinate descent strategy, and (ii) a fast algorithm for minimizing the subproblems involved in the resulting block coordinate scheme. Experimental evaluations show that the proposed algorithm runs substantially faster than the previous approaches of [7] and [10]. The paper is well written and the problem is clearly presented.
Pros:
- The paper presents an algorithm to solve an optimization problem
that is significantly faster than available alternatives.
Cons:
- it is not clear why this particular formulation is better than other similar alternatives that can be efficiently optimized
- the proposed approach seems limited to work with the L2 norm as fitting cost.
- the convergence results for the block coordinate scheme is not
applicable to the proposed algorithm
General comment:
1.
The measure used for sparsifying the NMF is an L1/L2 measure proposed in [10] (based on the relationship between the L1 and L2 norm of a given vector). The authors list interesting properties of this measure to justify its use and it seems a good option.
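(For reference, the measure of [10] is sparseness(x) = (sqrt(n) - ||x||_1 / ||x||_2) / (sqrt(n) - 1) for a non-zero x in R^n; it equals 1 for a vector with a single non-zero entry and 0 for a constant vector.)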
I understand that it is not the purpose of this paper to study or compare different regularizers. However, I believe that the authors should provide clear examples where this precise formulation (with the equality constraint) is better. Maybe even an empirical evaluation (or a reference to a paper performing this study). Having a hard constraint on the sparsity level for every data code (or dictionary atom) seems too restrictive.
This is a very relevant issue, since explicitly imposing the sparsity constraint leads to a harder optimization problem with slower optimization algorithms (as explained by the authors). An important modeling advantage is required to justify the increase in complexity.
In the work:
Berry, M. W., et al. 'Algorithms and applications for approximate nonnegative matrix factorization.' Computational Statistics & Data Analysis 52.1 (2007): 155-173.
the authors adopt the sparsity measure form [10] but include it on a Lagrangian formulation. This implicit way of imposing sparsity can be combined with other fitting terms (e.g. beta divergences) and it is easier to optimize.
This was done with a very similar sparsity measure in the work:
V, Tuomas. 'Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria.' Audio, Speech, and Language Processing, IEEE Transactions on 15.3 (2007): 1066-1074.
The author proposes to add to the cost function a sparsity regularization term also of the form L1/L2 and was later used for audio source separation in:
W. Felix, J. Feliu, and B. Schuller. 'Supervised and semi-supervised suppression of background music in monaural speech recordings.' Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on. IEEE, 2012.
2.
The strategy proposed in this paper alternately fixes one matrix and minimizes over the other in a block coordinate fashion. This contrasts with [10], in which only a descent direction is searched for. Maybe this is another reason for the speed-up?
When the minimization is performed on a matrix factor that is subject to the sparsity constraint the authors employ a block coordinate descent strategy, referred to as sequential-pass. The authors empirically demonstrate that this strategy leads to a significant improvement in time.
The minimization over each block (or column) leads to a linear maximization problem subject to the constraint that the L1 and L2 norms be constant, referred to as sparse-opt. The authors propose an algorithm for exactly solving this subproblem. This problem also appears in [10], but in a slightly different way: in [10] the author proposes a heuristic method for projecting a vector to meet the sparsity constraints (a sort of proximal projection, but onto a non-convex set).
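(Written out, as I read it from the descriptions here and in the public comments — the exact constants may be stated differently in the paper: sparse-opt maximizes b^T y over y >= 0 subject to ||y||_1 = k and ||y||_2 = 1, where k is fixed by the target sparsity level.)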
In Theorem 3, the authors present a convergence result for a relaxed version of the sequential-pass. Specifically, they relax the constraint on the L2 norm to be an inequality (instead of an equality). In this new setting, imposing the L1 norm to be constant no longer implies that the sparsity measure is constant. The quotient L1/L2 should be used instead, but this no longer coincides with the sparse-opt problem.
Other minor comments:
- In Section 2.2 and later in Section 6, the authors refer to the properties that a good sparsity measure should have, according to [12]. I think that it would help the clarity of the presentation to briefly list these properties in Section 2.2 instead of defining them within the text of Section 6.
- In Section 3.1, the equation for the Lagrangian of the problem (5) should also include the term corresponding to the non-negativity constraint on y. This does not affect the derivation, since the multipliers for that constraint would be zero whenever the corresponding y_i are non-zero, so the obtained values of lambda, mu and obj would remain the same. |
G0OapcfeK3g_R | Block Coordinate Descent for Sparse NMF | [
"Vamsi Potluru",
"Sergey M. Plis",
"Jonathan Le Roux",
"Barak A. Pearlmutter",
"Vince D. Calhoun",
"Thomas P. Hayes"
] | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow and other formulations for sparse NMF have been proposed such as those based on L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets. | [
"sparsity",
"norm",
"datasets",
"block coordinate descent",
"nmf",
"ubiquitous tool",
"data analysis"
] | https://openreview.net/pdf?id=G0OapcfeK3g_R | https://openreview.net/forum?id=G0OapcfeK3g_R | Y8F18yu7HQ6aJ | review | 1,361,826,300,000 | G0OapcfeK3g_R | [
"everyone"
] | [
"Vamsi Potluru"
] | ICLR.cc/2013/conference | 2013 | review: Thanks a lot for pointing this out. You are right about the issue. We
are currently working on fixing the proof, as we hope that in our
particular case the objective function will force the L2 equality
constraint to be active at the optimum. The algorithm does still work
fine in practice, and we have never encountered an occurrence of
divergence in our experiments. We will take out the proof if we
cannot fix it by the review deadline. |
G0OapcfeK3g_R | Block Coordinate Descent for Sparse NMF | [
"Vamsi Potluru",
"Sergey M. Plis",
"Jonathan Le Roux",
"Barak A. Pearlmutter",
"Vince D. Calhoun",
"Thomas P. Hayes"
] | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow and other formulations for sparse NMF have been proposed such as those based on L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets. | [
"sparsity",
"norm",
"datasets",
"block coordinate descent",
"nmf",
"ubiquitous tool",
"data analysis"
] | https://openreview.net/pdf?id=G0OapcfeK3g_R | https://openreview.net/forum?id=G0OapcfeK3g_R | YlFHNQiVHDYVP | review | 1,363,996,080,000 | G0OapcfeK3g_R | [
"everyone"
] | [
"anonymous reviewer d723"
] | ICLR.cc/2013/conference | 2013 | review: Dear authors,
the revision of your paper is appreciated; the three major issues from my review have been resolved.
I agree that explicit constraints may be harder to optimize, but the argument that then (non-expert) users can get the representation they want without fiddling parameters is a very good one and does motivate this line of research. I don't think a radiologist would want to spend too much time analyzing a brain scan using non-intuitive knobs. It would be nice if you could add some fMRI or sMRI images to enhance Figure 1.
Here is a short list of things that may help you improve your paper (I emphasize that these are no must-haves):
- Page 5, replace 'i' with 'i = 1' in the sums of the Lagrangian's derivatives (this would be consistent with the other sums).
- In the first line after the derivatives, add gamma to the Lagrange parameters.
- In the same paragraph, mention that the termination criterion from the for loop of Sparse-opt is equivalent to selecting the one that maximizes b' * y (you may want to cite [5] there).
- In Algorithm 2 (Sparse-opt), p^star should be initialized just before the for loop (for the reason given in my review).
- In line 3 of Algorithm 2, replace the two-element set (with ceil(k^2) and m) with an ordinary for-loop (with the new termination criterion, the order in which {ceil(k^2), ..., m} is traversed is important).
- Page 7, Section 5.1: Tidy up the third bullet point (it's somehow intermixed with the van Hateren data set which you don't seem to use anyways).
- Same section, fourth bullet point: use '\url{}' for the URL to SPM5 (this would be consistent with the style you used for the other URLs). Add a period at the end of the text of that bullet point.
- Page 8, Section 5.2, last paragraph: You should add one sentence on how the running times behave when Bi-Sparse NMF from the Appendix is used, as there Sparse-opt is carried out for high-dimensional vectors (the dimension there equals the number of samples for the rows of H).
- Page 10: Figure 6 should be moved to be on the same page as Section 5.2, where it is referenced.
- Page 11, Section 7: Remove the 'heuristic' in the final line of the first paragraph.
- Page 12, Reference [15] (Hsieh and Dhillon): The page here still reads 'xx'.
- In Figures 4, 5 and 6 the y axes from the upper rows interfere with those of the lower rows.
- There are also some stray white spaces throughout the paper that should be fixed. |
G0OapcfeK3g_R | Block Coordinate Descent for Sparse NMF | [
"Vamsi Potluru",
"Sergey M. Plis",
"Jonathan Le Roux",
"Barak A. Pearlmutter",
"Vince D. Calhoun",
"Thomas P. Hayes"
] | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow and other formulations for sparse NMF have been proposed such as those based on L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets. | [
"sparsity",
"norm",
"datasets",
"block coordinate descent",
"nmf",
"ubiquitous tool",
"data analysis"
] | https://openreview.net/pdf?id=G0OapcfeK3g_R | https://openreview.net/forum?id=G0OapcfeK3g_R | cc18-e0C8uSHG | review | 1,363,661,460,000 | G0OapcfeK3g_R | [
"everyone"
] | [
"Vamsi Potluru"
] | ICLR.cc/2013/conference | 2013 | review: Anonymous d723:
1. Thanks for pointing out the bug in the projection operator algorithm Sparse-opt. We re-ran all the algorithms on
the datasets based on the suggested bugfix and generated new figures for all the datasets.
2. We highlight the efficiency of our algorithm (O(m log m)) compared to the worst-case O(m^2) of
Hoyer's projection algorithm. Our algorithm can be further improved to have linear time complexity by using a
partitioning algorithm similar to the one in Quicksort.
3. We have fixed most of the issues such as references to parallel updates, increasing figure sizes, modifying citations, and
adding line numberings.
Anonymous 1d08:
1. Sparsity on the features can be set as user-defined intervals. This is illustrated on the ORL face dataset where
we are able to learn sets of local and global features. In practice, this enables the user to finely tune the
features based on domain knowledge. For instance, in fMRI datasets, one could model the features for the
brain signals and those of the artifacts distinctly, based on different sparsity intervals.
2. Implicit regularization may lead to easier optimization problems but can be harder to interpret from a user point of view.
The regularization parameter maps to sparsity values of the features but it is hard to know what this mapping should be before
the algorithm is run.
3. We have fixed the Lagrangian formulation to include the nonnegativity constraints and added a brief list of desirable
properties to section 2.2.
Anonymous 202b:
Sparse PCA and dictionary learning are slightly different formulations from the one considered here.
Also, SPAMS does not consider the exact formulation of the problem we are tackling in this paper.
We are solving an explicitly constrained sparsity problem, and this relates to the question posed by reviewer 1d08.
So, a direct comparison of running-times for algorithms solving different problem formulations would not be fair.
Hopefully, the running-time cost of our algorithm pays off in applications where explicitly modeling the user requirements
is of primary importance.
-----------------
We have removed the convergence proof from the present draft based on the comments from the reviewers
and Paul Shearer. However, we are looking into fixing the proof for the final version.
Also, we are looking into other examples where one would like to explicitly constrain the sparsity of
the factorization.
Thanks again to all the reviewers for the constructive suggestions and insightful questions.
If the arxiv version is not updated by view time, please find a copy at:
http://www.cs.unm.edu/~ismav/papers/ncnmf.pdf |
G0OapcfeK3g_R | Block Coordinate Descent for Sparse NMF | [
"Vamsi Potluru",
"Sergey M. Plis",
"Jonathan Le Roux",
"Barak A. Pearlmutter",
"Vince D. Calhoun",
"Thomas P. Hayes"
] | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow and other formulations for sparse NMF have been proposed such as those based on L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets. | [
"sparsity",
"norm",
"datasets",
"block coordinate descent",
"nmf",
"ubiquitous tool",
"data analysis"
] | https://openreview.net/pdf?id=G0OapcfeK3g_R | https://openreview.net/forum?id=G0OapcfeK3g_R | OEMFOvtudWEJh | review | 1,362,274,980,000 | G0OapcfeK3g_R | [
"everyone"
] | [
"anonymous reviewer 202b"
] | ICLR.cc/2013/conference | 2013 | title: review of Block Coordinate Descent for Sparse NMF
review: This paper considers a dictionary learning algorithm for positive data. The sparse NMF approach imposes sparsity on the atoms, and positivity on both atoms and decomposition coefficients.
The formulation is standard, i.e., applying a contraint on the L1-norm and L2-norm of the atoms. The (limited) novelty comes in the optimization formulation: (a) with respect to the atoms, block coordinate descent is used with exact updates which are based on an exact projection into a set of constraints (this projection appears to be novel, though the derivation is standard), (b) with respect to the decomposition coefficients, multiplicative updates are used.
Running-time comparisons are done, showing that the new formulation outperforms some existing approaches (from 2004 and 2007).
Pros:
-Clever use of the structure of the problem for algorithm design
Cons:
-The algorithm is not compared to the state of the art (there has been some progress in sparse PCA and dictionary learning since 2007). In particular, the SPAMS toolbox of [19] allows sparse dictionary learning with positivity constraints. A comparison with this toolbox would help to assess the significance of the improvements.
-Limited novelty. |
G0OapcfeK3g_R | Block Coordinate Descent for Sparse NMF | [
"Vamsi Potluru",
"Sergey M. Plis",
"Jonathan Le Roux",
"Barak A. Pearlmutter",
"Vince D. Calhoun",
"Thomas P. Hayes"
] | Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm L$_1$/L$_2$ are slow and other formulations for sparse NMF have been proposed such as those based on L$_1$ and L$_0$ norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets. | [
"sparsity",
"norm",
"datasets",
"block coordinate descent",
"nmf",
"ubiquitous tool",
"data analysis"
] | https://openreview.net/pdf?id=G0OapcfeK3g_R | https://openreview.net/forum?id=G0OapcfeK3g_R | KKA-Ef3zTjKbl | review | 1,362,186,300,000 | G0OapcfeK3g_R | [
"everyone"
] | [
"anonymous reviewer d723"
] | ICLR.cc/2013/conference | 2013 | title: review of Block Coordinate Descent for Sparse NMF
review: This paper proposes new algorithms to minimize the Non-negative Matrix Factorization (NMF) reconstruction error in Frobenius norm subject to additional sparseness constraints (NMFSC) as originally proposed by [R1]. The original method from [R1] to minimize the reconstruction error is a projected gradient descent. While in [R1] a geometrically inspired method is used to compute the projection onto the sparseness constraints, this paper proposes to use Lagrange multipliers instead. To solve the NMFSC problem, the authors propose to update the basis vectors one at a time (therefore their method is called Sequential Sparse NMF or SSNMF), while in ordinary NMF/NMFSC the entire matrix with the basis vectors is updated at once. Experiments are reported that show that SSNMF is one order of magnitude faster compared to the algorithm of [R1].
The paper may only propose more efficient algorithms to solve a known optimization problem instead of proposing new learnable representations, but the approach is interesting and the results are promising. There are however some major issues with the paper:
(1) The sparseness projection of [R1] is essentially a Euclidean projection onto the intersection of a scaled probabilistic simplex (L1 sphere intersected with positive cone) and the scaled unit sphere (in L2 norm). The method of [R1] to compute this projection is an alternating projection algorithm (similar to the Dykstra algorithm for convex sets). The method was proven correct by [R2], and additionally it was shown that the projection is unique almost everywhere. Therefore, the method of [R1] and Algorithm 2 of the paper (Sparse-opt) should almost always compute the same result. In the paper, however, the sparseness projection of [R1] is denoted the 'projection-heuristic' while Sparse-opt is called 'exact', and when the projection of [R1] is used in the SSNMF algorithm instead of Sparse-opt the reconstruction error is no longer monotonically decreasing as optimization proceeds. As both projection algorithms should compute the same result, the plot should be identical for them when using the same starting points. Section 5.2 of the paper should be enhanced to verify whether both algorithms actually compute the same result and to find the bug that causes this contradiction.
(2) The proposed Algorithm 2 can be considered a (non-trivial) extension of the projection onto a scaled probabilistic simplex as described by [R3] and is a valuable contribution. In the paper, there is however a bug in the execution (which may explain the discrepancies described in Issue (1)): There are no multipliers that enforce the entries of the projection to be non-negative, as would be required by Problem (5) in the paper. Analogously, in Algorithm 2 there is no check in the loop of Line 2 to guarantee the values for lambda and mu produce a feasible (that is non-negative) solution. I implemented the algorithm in Matlab and compared it to the sparseness projection of [R1] (which is freely available on the author's homepage). In the algorithm as given in the paper, p_star always equals m after line 3 and no correct solution to Problem (5) is found in general. If I add the check for a feasible solution, both Sparse-opt and the sparseness projection of [R1] compute numerically equal results. I first suspected there was a typo in the manuscript, but that still would not explain the contradictory results from Section 5.2 of the paper.
On the positive side, I did check the expressions for lambda, mu and obj as given in Algorithm 2, and found them correct. Further, the algorithm is empirically faster than that of [R1], and its run-time is guaranteed theoretically to be at most quasilinear.
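For orientation, the plain scaled-simplex projection of [R3] that Sparse-opt extends can be written in a few lines; the following is my own NumPy transcription of the sort-based routine from [R3], not code from the paper, and it does not handle the additional L2 constraint that Algorithm 2 adds:

import numpy as np

def project_scaled_simplex(v, z=1.0):
    # Euclidean projection of v onto {w : w >= 0, sum(w) = z}, cf. [R3]
    u = np.sort(v)[::-1]                             # entries in decreasing order
    css = np.cumsum(u)
    j = np.arange(1, v.size + 1)
    rho = np.nonzero(u - (css - z) / j > 0)[0][-1]   # last index kept active
    theta = (css[rho] - z) / (rho + 1.0)             # shift that enforces sum(w) = z
    return np.maximum(v - theta, 0.0)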
Based on the bugfix, I realized that the method from [R4] could be adapted to Sparse-opt to further enhance its run-time efficiency: Set p_star to m before the for loop of line 2 (in case all elements of the projection will be non-zero). Then, after computation of lambda and mu (obj does not need to be computed anymore with this modification), check if a_p < -mu(p) holds. If it does, set p_star to p - 1 and break the for loop. Line 3 of the algorithm should then be omitted. This modification fixes the algorithm, and additionally obj is not needed, and for lambda and mu simple scalars are sufficient to store at most two values of each.
(3) As noted by Paul Shearer and confirmed by the first author of the paper (see public comments), the proof of Theorem 3 is flawed as the arguments there would only apply if the sparseness constraints would induce a convex set (which they don't). I wouldn't have any objections if Theorem 3 and its proof were withdrawn and removed from the manuscript.
Moreover, I verified Algorithm 3 from the paper and found no obvious bugs. I implemented all algorithms and ran them on the ORL face data set and found that SSNMF computes a sparse representation. I did not check what happens without the bugfix for Algorithm 2, though. The authors should definitely fix the major issues and repeat the experiments before publication (judging from the run-time given in Figure 3 and Figure 4 this shouldn't take too long).
There are some minor issues too:
- It should be briefly discussed whether SSNMF could benefit from a multi-threaded implementation as NMF/NMFSC do (in the experiments, the number of threads was set to one).
- Figures should be enlarged and improved such that a difference between the plots is also noticeable when printed in black and white on letter size paper.
- The references should be polished to achieve a consistent style (remove the URLs and ISSNs, don't use two different styles for JMLR ([7] and [10]) and NIPS ([6] and [17]), fix the page of [11], add volume and issue to [15] and [23], add the journal version of [21] unless that citation is withdrawn with Theorem 3, etc.).
- Always cite using numbers in the main text ('\cite{}') instead of using only the author names ('\citeauthor{}') (e.g. Hoyer, Kim and Park, etc.), because now [9], [10] and [13], [14] could be confused.
- The termination criteria should be described more elaborately for Algorithms 1, 3, and 4.
- Page 2, just after Expression (1): This is only a convex combination if the rows of H are normed (wrt. L1), otherwise it's a conical combination.
- Page 2, just after Expression (2): We use *subscripts* to denote... (missing s). Please also define what H_j^T would mean (is it (H_j)^T or (H^T)_j or something else?).
- It would be nice to add line numbers to all algorithms (some have ones, some don't).
- In Algorithm 3, Line 7: This should probably read G_j^T, as i is not defined here?
- Mention the number of images for the sMRI data set in Section 5.1, and use '\url{}' or a footnote for the URL there.
- Cite [7] in the third bullet point in Section 2.2.
References:
[R1] Hoyer. Non-negative Matrix Factorization with Sparseness Constraints. JMLR, 2004, vol. 5, pp. 1457-1469.
[R2] Theis et al. First results on uniqueness of sparse non-negative matrix factorization. EUSIPCO, 2005, vol. 3, pp. 1672-1675.
[R3] Duchi et al. Efficient Projections onto the l1-Ball for Learning in High Dimensions. ICML, 2008, pp. 272-279.
[R4] Chen & Ye. Projection Onto A Simplex. arXiv:1101.6081v2, 2011. |
aQZtOGDyp-Ozh | Learning Stable Group Invariant Representations with Convolutional
Networks | [
"Joan Bruna",
"Arthur Szlam",
"Yann LeCun"
] | Transformation groups, such as translations or rotations, effectively express part of the variability observed in many recognition problems. The group structure enables the construction of invariant signal representations with appealing mathematical properties, where convolutions, together with pooling operators, bring stability to additive and geometric perturbations of the input. Whereas physical transformation groups are ubiquitous in image and audio applications, they do not account for all the variability of complex signal classes.
We show that the invariance properties built by deep convolutional networks can be cast as a form of stable group invariance. The network wiring architecture determines the invariance group, while the trainable filter coefficients characterize the group action. We give explanatory examples which illustrate how the network architecture controls the resulting invariance group. We also explore the principle by which additional convolutional layers induce a group factorization enabling more abstract, powerful invariant representations. | [
"variability",
"invariance group",
"convolutional networks",
"translations",
"rotations",
"express part",
"many recognition problems",
"group structure"
] | https://openreview.net/pdf?id=aQZtOGDyp-Ozh | https://openreview.net/forum?id=aQZtOGDyp-Ozh | s1Kr1S64z0s8a | review | 1,362,379,800,000 | aQZtOGDyp-Ozh | [
"everyone"
] | [
"anonymous reviewer 3316"
] | ICLR.cc/2013/conference | 2013 | title: review of Learning Stable Group Invariant Representations with Convolutional
Networks
review: This short paper presents a discussion on the nature and the type of invariances that are represented and learned by convolutional neural networks. It claims that the invariance of a layer in a convolutional neural network can be expressed with a Lie group, and that the invariance of a deep convolutional neural network can be expressed with a product of groups.
This is a discussion paper that is difficult to understand without being familiar with group theory. It would be easier to read if there were at least toy examples illustrating the concepts presented in this work. In its current form the paper is incomplete; to be useful, it needs to use these ideas to somehow improve the training or generalization of convolutional neural networks. On a related note, it is hard to understand the significance of the results. So the invariance of a deep convolutional neural network can be expressed with a semi-direct product of some groups; this is nice, but what does it lead to, and how can it be used?
To summarize, the paper has intriguing ideas, but they are not sufficiently developed, and their significance is not clearly explained. |
aQZtOGDyp-Ozh | Learning Stable Group Invariant Representations with Convolutional
Networks | [
"Joan Bruna",
"Arthur Szlam",
"Yann LeCun"
] | Transformation groups, such as translations or rotations, effectively express part of the variability observed in many recognition problems. The group structure enables the construction of invariant signal representations with appealing mathematical properties, where convolutions, together with pooling operators, bring stability to additive and geometric perturbations of the input. Whereas physical transformation groups are ubiquitous in image and audio applications, they do not account for all the variability of complex signal classes.
We show that the invariance properties built by deep convolutional networks can be cast as a form of stable group invariance. The network wiring architecture determines the invariance group, while the trainable filter coefficients characterize the group action. We give explanatory examples which illustrate how the network architecture controls the resulting invariance group. We also explore the principle by which additional convolutional layers induce a group factorization enabling more abstract, powerful invariant representations. | [
"variability",
"invariance group",
"convolutional networks",
"translations",
"rotations",
"express part",
"many recognition problems",
"group structure"
] | https://openreview.net/pdf?id=aQZtOGDyp-Ozh | https://openreview.net/forum?id=aQZtOGDyp-Ozh | uLsKzjPT0lx8V | review | 1,361,928,660,000 | aQZtOGDyp-Ozh | [
"everyone"
] | [
"anonymous reviewer bf60"
] | ICLR.cc/2013/conference | 2013 | title: review of Learning Stable Group Invariant Representations with Convolutional
Networks
review: I fully admit that I don't know enough about group theory to evaluate this submission. However, I do know about convolutional networks, so it is troubling that I can't understand it.
Since this is only a workshop paper, we're not going to look for a new reviewer.
When you do eventually pursue conference publication, I would suggest that you consider the audience and adapt the presentation somewhat, so that people who are familiar with convolutional networks but not with group theory will be able to get an idea of what the paper is about, and can read about the appropriate subjects to be able to understand it better.
I would also suggest providing a high level summary of the paper that makes it clear what you consider your original contributions to be. I had a hard time telling what was original content and what was just describing what convolutional networks are in group theory notation. |
aQZtOGDyp-Ozh | Learning Stable Group Invariant Representations with Convolutional
Networks | [
"Joan Bruna",
"Arthur Szlam",
"Yann LeCun"
] | Transformation groups, such as translations or rotations, effectively express part of the variability observed in many recognition problems. The group structure enables the construction of invariant signal representations with appealing mathematical properties, where convolutions, together with pooling operators, bring stability to additive and geometric perturbations of the input. Whereas physical transformation groups are ubiquitous in image and audio applications, they do not account for all the variability of complex signal classes.
We show that the invariance properties built by deep convolutional networks can be cast as a form of stable group invariance. The network wiring architecture determines the invariance group, while the trainable filter coefficients characterize the group action. We give explanatory examples which illustrate how the network architecture controls the resulting invariance group. We also explore the principle by which additional convolutional layers induce a group factorization enabling more abstract, powerful invariant representations. | [
"variability",
"invariance group",
"convolutional networks",
"translations",
"rotations",
"express part",
"many recognition problems",
"group structure"
] | https://openreview.net/pdf?id=aQZtOGDyp-Ozh | https://openreview.net/forum?id=aQZtOGDyp-Ozh | 7XaieIunN4X1I | review | 1,363,658,220,000 | aQZtOGDyp-Ozh | [
"everyone"
] | [
"Joan Bruna"
] | ICLR.cc/2013/conference | 2013 | review: I would like to thank the reviewers for their time and constructive comments.
Indeed, the paper, in its current form, explores the connection between deep convolutional networks and group invariance; but it lacks practical examples to motivate why this connection might be useful or interesting.
I completely agree in that the paper is difficult to read and could be made much more accessible. Together with the practical aspects mentioned in the last section, this will be my priority.
Thank you again. |
6s2YsOZPYcb8N | Cutting Recursive Autoencoder Trees | [
"Christian Scheible",
"Hinrich Schuetze"
] | Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on empirical tests to see whether a particular structure makes sense. In this paper, we present an analysis of a well-received model that produces structural representations of text: the Semi-Supervised Recursive Autoencoder. We show that for certain tasks, the structure of the autoencoder may be significantly reduced and we evaluate the produced structures through human judgment. | [
"recursive autoencoder trees",
"difficult",
"analysis",
"models",
"considerable success",
"natural language processing",
"deep architectures",
"useful representations",
"improvements",
"various tasks"
] | https://openreview.net/pdf?id=6s2YsOZPYcb8N | https://openreview.net/forum?id=6s2YsOZPYcb8N | KB-5ppfbu7pwL | review | 1,362,170,160,000 | 6s2YsOZPYcb8N | [
"everyone"
] | [
"anonymous reviewer 5a71"
] | ICLR.cc/2013/conference | 2013 | title: review of Cutting Recursive Autoencoder Trees
review: The paper considers the compositional model of Socher et al. (EMNLP 2011) for predicting sentence opinion polarity. The authors define several model simplification types (e.g., reducing the maximal number of levels) and study how these changes affect sentiment prediction performance. They also study how well the induced structures agree with human judgement, i.e. how linguistically plausible (both from syntactic and semantic point of view) the structures are.
I find this work quite interesting and the (main?) result somewhat surprising. What it basically shows is that the compositional part does not seem to benefit the sentence-level sentiment performance. In other words, a bag-of-words model with distributed word representations performs as well (or better). An additional, though somewhat small-scale, annotation study shows that the model is not particularly accurate in representing opinion shifters (e.g., 'not' does not seem to reverse polarity reliably).
Though some of the choices of model simplification seem relatively arbitrary (e.g., why choose a single subtree rather than, say, drop several subtrees within some budget?) and the human evaluation is somewhat small scale (and, consequently, not entirely convincing), I found the above observation interesting and important.
It would also be interesting to see if the compositional model appears to be more important when an actual syntactic tree (as in Socher et al (NIPS 2011) for paraphrase detection) is used instead of automatically inducing the structure.
One point which might be a little worrying is that the same parameters are used across different learning architectures, though one may expect that different regularizations and training regimes might be needed. However, the full model is estimated with the parameters chosen by the model designers on the same datasets, so it should not affect the above conclusion.
Pros:
-- It provides an interesting analysis of the influential model of Socher et al. (2011)
-- Both an analysis of linguistic plausibility and an analysis of the effect of model components on sentiment prediction performance are provided. Though the original publication (Socher et al., EMNLP 2011) contained a BOW baseline, it was not exactly comparable; the flat model studied here seems a more natural baseline.
Cons:
-- The semantic and syntactic coherence analysis may be too small scale to be seriously considered (2 human experts on a couple dozen examples). |
6s2YsOZPYcb8N | Cutting Recursive Autoencoder Trees | [
"Christian Scheible",
"Hinrich Schuetze"
] | Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on empirical tests to see whether a particular structure makes sense. In this paper, we present an analysis of a well-received model that produces structural representations of text: the Semi-Supervised Recursive Autoencoder. We show that for certain tasks, the structure of the autoencoder may be significantly reduced and we evaluate the produced structures through human judgment. | [
"recursive autoencoder trees",
"difficult",
"analysis",
"models",
"considerable success",
"natural language processing",
"deep architectures",
"useful representations",
"improvements",
"various tasks"
] | https://openreview.net/pdf?id=6s2YsOZPYcb8N | https://openreview.net/forum?id=6s2YsOZPYcb8N | SPfmPG0ry9nrB | review | 1,362,361,260,000 | 6s2YsOZPYcb8N | [
"everyone"
] | [
"anonymous reviewer 2611"
] | ICLR.cc/2013/conference | 2013 | title: review of Cutting Recursive Autoencoder Trees
review: This research analyses the Semi-Supervised Recursive Autoencoder (RAE) of Socher et al., as trained on the NLP task of sentiment classification of sentences from movie reviews.
A first qualitative analysis, conducted with the help of human annotators, reveals that the syntactic and semantic role of reversers ('not') is not modeled well in many cases.
Then a systematic quantitative analysis is conducted, using the representation of a sentence as the average of the representation output at each node of the tree to train a classifier of sentiment, and analysing what is lost or gained by using only specific subsets of the tree nodes. These results clearly indicate that intermediate nodes bring no additional value for classification performance compared to using only the word embeddings learned at the leaf nodes.
The full depth of the tree appears to extract no more useful information than the leaf nodes only.
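To make the two feature constructions being compared concrete, here is a minimal illustrative sketch (not code from the paper; the helper names and the use of scikit-learn's logistic regression are assumptions of this sketch):

import numpy as np
from sklearn.linear_model import LogisticRegression

def sentence_vector(node_vectors, leaf_mask, leaves_only=False):
    # node_vectors: (n_nodes, d) array of RAE outputs; leaf rows hold word
    # embeddings, internal rows hold composed phrase representations.
    # leaf_mask: boolean array marking which rows are leaves.
    vecs = node_vectors[leaf_mask] if leaves_only else node_vectors
    return vecs.mean(axis=0)

def classifier_accuracy(train, test, leaves_only):
    # train/test: lists of ((node_vectors, leaf_mask), label) pairs.
    X_tr = np.stack([sentence_vector(v, m, leaves_only) for (v, m), _ in train])
    y_tr = np.array([y for _, y in train])
    X_te = np.stack([sentence_vector(v, m, leaves_only) for (v, m), _ in test])
    y_te = np.array([y for _, y in test])
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

Comparing classifier_accuracy(..., leaves_only=True) with leaves_only=False is the kind of contrast the paper's tree-cutting experiments draw.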
Pros:
I believe that this paper's analysis is a significant contribution with an important message. It is well conducted and properly questions and sheds light on the meaning and usefulness of the tree *structure* learned by RAEs, showing that drastic structure simplifications yield the same state-of-the-art performance on the considered classification task. It has the potential to start a healthy controversy, that will surely seed interest into further investigation of this important point.
Cons:
The message would carry much more weight if a similar analysis of RAEs could be conducted also on several other (possibly more challenging) NLP tasks than movie sentiment classification and pointed towards similar conclusions. |
6s2YsOZPYcb8N | Cutting Recursive Autoencoder Trees | [
"Christian Scheible",
"Hinrich Schuetze"
] | Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on empirical tests to see whether a particular structure makes sense. In this paper, we present an analysis of a well-received model that produces structural representations of text: the Semi-Supervised Recursive Autoencoder. We show that for certain tasks, the structure of the autoencoder may be significantly reduced and we evaluate the produced structures through human judgment. | [
"recursive autoencoder trees",
"difficult",
"analysis",
"models",
"considerable success",
"natural language processing",
"deep architectures",
"useful representations",
"improvements",
"various tasks"
] | https://openreview.net/pdf?id=6s2YsOZPYcb8N | https://openreview.net/forum?id=6s2YsOZPYcb8N | XHzDeHdtlbXIc | review | 1,362,455,040,000 | 6s2YsOZPYcb8N | [
"everyone"
] | [
"Arun Tejasvi Chaganty"
] | ICLR.cc/2013/conference | 2013 | review: The paper presents a very interesting error analysis of recursive autoencoder trees. However, I wish the following aspects of the evaluation had been addressed.
a) In the qualitative analysis (Section 5), only 10 samples out of a corpus of over 10,000 were studied. This is too small to make any statistically significant statements.
b) When describing the behaviour on sentences with reversing constructions, it is not clear how the RAE trees actually predicted the sentiment of these sentences; were the three correct instances those with reversed sentiment, suggesting that the RAE trees always reverse the sentiment when a reverser appears?
c) Looking at the results from the quantitative analysis, the fact that the RAE trees predict with a full 77.5% accuracy despite random feature embeddings seems to be a strong signal that the parse structure is playing a very important role. This conflicts with the qualitative analysis, which suggests that compositionality is not well modelled by RAEs. I feel more space should be dedicated to discussing this result. If the real compositionality modelled by the RAE trees lies in intensifying constructions, it should be possible to evaluate intensifying constructions by comparing the softmax classification weights for a sentiment. |
6s2YsOZPYcb8N | Cutting Recursive Autoencoder Trees | [
"Christian Scheible",
"Hinrich Schuetze"
] | Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on empirical tests to see whether a particular structure makes sense. In this paper, we present an analysis of a well-received model that produces structural representations of text: the Semi-Supervised Recursive Autoencoder. We show that for certain tasks, the structure of the autoencoder may be significantly reduced and we evaluate the produced structures through human judgment. | [
"recursive autoencoder trees",
"difficult",
"analysis",
"models",
"considerable success",
"natural language processing",
"deep architectures",
"useful representations",
"improvements",
"various tasks"
] | https://openreview.net/pdf?id=6s2YsOZPYcb8N | https://openreview.net/forum?id=6s2YsOZPYcb8N | fvJTwf6BDQvYu | review | 1,362,019,200,000 | 6s2YsOZPYcb8N | [
"everyone"
] | [
"anonymous reviewer 5b0f"
] | ICLR.cc/2013/conference | 2013 | title: review of Cutting Recursive Autoencoder Trees
review: This paper analyzes recursive autoencoders for a binary sentiment analysis task.
The authors include two types of analyses: looking at example trees for syntactic and semantic structure and analyzing performance when the induced tree structures are cut at various levels.
More in depth analysis of these new models is definitely an interesting task. Unfortunately, the presented analyses and conclusions are incomplete or flawed.
Tree Cutting Analysis:
This experiment explores the interesting question of how important the word vectors and tree structures are for the RAE.
The authors incorrectly conclude that 'the strength of the RAE lies in the embeddings, not in the induced tree structure'.
This conclusion is reached by comparing the following two models (among others):
1) A word vector average model with no tree structures that uses about 50x10,000 parameters (50 dimensional word vectors and a vocabulary of around 10,000 words) and reaches 77.67% accuracy.
2) A RAE model with random word vectors that uses 50 x 100 parameters and gets 77.49% accuracy.
An accuracy difference of 0.18% on a test set of ~1000 trees means that ~2 sentences are classified differently and is not statistically significant. So the results of both models are the same.
That means that the RAE trees achieved the same performance with 1/100 of the parameters of the word vectors. So, the tree structures seem to work pretty well, even on top of random word vectors.
A good comparison would be between models with the same number of parameters. Both models could easily be increased or decreased in size.
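A rough sanity check of the significance claim above (assuming a test set of about 1,000 sentences and treating each model's accuracy as a binomial proportion; with per-sentence predictions a paired test such as McNemar's would be more appropriate):

import math

n = 1000                          # assumed test-set size (~1000 trees)
p = 0.775                         # both accuracies are near 77.5%
se = math.sqrt(p * (1 - p) / n)   # standard error of a single accuracy estimate
gap = 0.7767 - 0.7749             # reported difference between the two models
print(f"standard error ~ {se:.3f}, observed gap ~ {gap:.4f}")
# The ~0.002 gap is far below the ~0.013 standard error, consistent with
# treating the two results as statistically indistinguishable.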
One possible take away message could have been that the benefits of RAE tree structures and word embeddings are equal but performance does not increase when both are combined in a task that only has a single label for a full sentence.
But even that one is difficult:
All columns of the main results table (cutting trees) have the same top performance when it comes to statistical significance, so it would have also been good to look at another dataset.
Another problem is that the RAE induced vectors are only used by averaging all vectors in the tree.
More important analyses into the model could explore what the higher node vectors are capturing by themselves instead of only in an average with all lower vectors.
Tree Structure Analysis:
The first analysis is about the induced tree structures and finds that they do not follow traditional parsing trees.
This was already pointed out in the original RAE paper and they show several examples of phrases that are cut off.
An interesting comparison here would have been to apply the algorithm on correct parse trees and analyze if it makes a difference in performance.
The second analysis is about sentiment reversal, such as 'not bad'.
Unfortunately, the given binary examples are hard to interpret.
Are phrases like 'not bad' positive in the original training data? It's not clear to me that 'not bad' is a very positive phrase.
Do the probabilities change in the right direction? When does it work and when does it not work? Is the sentiment of the negated phrase wrong or is the negation pushing in the wrong direction?
In order to understand what the model should learn, it would have been interesting to see whether the effects are even present in the training dataset.
Another interesting analysis would be to construct some simple examples where the reversal is much clearer like 'really not good'.
The paper is well written.
Only one typo: E_cE and E_eC both used. |
6s2YsOZPYcb8N | Cutting Recursive Autoencoder Trees | [
"Christian Scheible",
"Hinrich Schuetze"
] | Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on empirical tests to see whether a particular structure makes sense. In this paper, we present an analysis of a well-received model that produces structural representations of text: the Semi-Supervised Recursive Autoencoder. We show that for certain tasks, the structure of the autoencoder may be significantly reduced and we evaluate the produced structures through human judgment. | [
"recursive autoencoder trees",
"difficult",
"analysis",
"models",
"considerable success",
"natural language processing",
"deep architectures",
"useful representations",
"improvements",
"various tasks"
] | https://openreview.net/pdf?id=6s2YsOZPYcb8N | https://openreview.net/forum?id=6s2YsOZPYcb8N | Od6cRb72yhb2P | review | 1,363,702,380,000 | 6s2YsOZPYcb8N | [
"everyone"
] | [
"Christian Scheible"
] | ICLR.cc/2013/conference | 2013 | review: Thanks everyone for your comments! I would like to address some of the points made across various comments.
I would like to point out to reviewer 'Anonymous 5b0f' that, in the experiment 'noembed', while the embeddings are not used in the classifier, they are still learned during RAE training. Thus, to train the RAE, we do indeed need 50x100 + 50x10,000 parameters, which makes RAE training more complicated than using embeddings only. Training the RAE without any embeddings produces results similar to 'noembed' line 1.
Regarding the tree structures, we found that they do not influence the results too much. We achieve around 74% accuracy by simply enforcing iterative combinations from left to right using a one-sided recursion rule.
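For illustration, a schematic sketch of such a one-sided, left-to-right composition (the tanh parent computation is the standard RAE form; the exact rule and parameters used in the experiment may differ):

import numpy as np

def compose(c1, c2, W, b):
    # Standard RAE-style parent computation: parent = tanh(W [c1; c2] + b),
    # with W of shape (d, 2d) and b of shape (d,).
    return np.tanh(W @ np.concatenate([c1, c2]) + b)

def left_to_right_nodes(word_vectors, W, b):
    # Deterministic left-branching structure: fold the sentence from left to
    # right instead of greedily picking the pair with the lowest
    # reconstruction error, as the learned RAE trees do.
    node = word_vectors[0]
    nodes = [node]
    for w in word_vectors[1:]:
        node = compose(node, w, W, b)
        nodes.append(node)
    return nodes  # intermediate representations, e.g. for averaging into features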
I agree with the point that a binary classification task is less complicated than, for example, a structured-prediction task and thus is too simple to show an improvement with a structural model. I nevertheless find the result interesting; structural understanding should help in sentiment analysis -- at least from a linguistic point of view. However, the RAE model does not seem to capture these properties very well. Socher et al. presented a matrix-vector-based approach at EMNLP 2012 which addresses this problem and is more suitable for modeling compositionality.
It is true that the human evaluation is rather small-scale. We intended this analysis to illustrate the point. Regarding the point about one of the examples, I see that 'not bad' is in itself not too positive, but I (and our annotators) would think that 'not bad at all' is positive. |
6s2YsOZPYcb8N | Cutting Recursive Autoencoder Trees | [
"Christian Scheible",
"Hinrich Schuetze"
] | Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on empirical tests to see whether a particular structure makes sense. In this paper, we present an analysis of a well-received model that produces structural representations of text: the Semi-Supervised Recursive Autoencoder. We show that for certain tasks, the structure of the autoencoder may be significantly reduced and we evaluate the produced structures through human judgment. | [
"recursive autoencoder trees",
"difficult",
"analysis",
"models",
"considerable success",
"natural language processing",
"deep architectures",
"useful representations",
"improvements",
"various tasks"
] | https://openreview.net/pdf?id=6s2YsOZPYcb8N | https://openreview.net/forum?id=6s2YsOZPYcb8N | 9IkTIwySTQw0C | review | 1,362,043,620,000 | 6s2YsOZPYcb8N | [
"everyone"
] | [
"Sida Wang"
] | ICLR.cc/2013/conference | 2013 | review: I've also done some (unpublished) analysis with using random and degenerate tree structures and found that it did not matter very much under the RAE framework. I just have a short comment for the results table.
Most of the different schemes, including the original RAE, eventually give roughly identical results near 77.6%, while Naive Bayes/Logistic Regression get an accuracy of around 78% with bag-of-words features and one can get over 79% with bag-of-bigrams features [Wang and Manning, Baselines and Bigrams: Simple, Good Sentiment and Topic Classification, ACL 2012].
It seems to me that one conclusion from these results is that many schemes will perform similarly once enough information is preserved in the training features. If around 80% accuracy is what a fairly general purpose machine learning algorithm can possibly be expected to do on this dataset without outside information, then one does not have to be very clever with a correct discriminative method to do just slightly worse than Naive Bayes/Logistic Regression.
Your results do suggest that the particular structure does not matter very much here, and neither does the embedding. But I think to really determine if the structure is doing anything, one should repeat this analysis in a place where the model with the structure is way better than the generic-linear-model-with-moderately-informative-features benchmark, preferably without using extra knowledge. |
6s2YsOZPYcb8N | Cutting Recursive Autoencoder Trees | [
"Christian Scheible",
"Hinrich Schuetze"
] | Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on empirical tests to see whether a particular structure makes sense. In this paper, we present an analysis of a well-received model that produces structural representations of text: the Semi-Supervised Recursive Autoencoder. We show that for certain tasks, the structure of the autoencoder may be significantly reduced and we evaluate the produced structures through human judgment. | [
"recursive autoencoder trees",
"difficult",
"analysis",
"models",
"considerable success",
"natural language processing",
"deep architectures",
"useful representations",
"improvements",
"various tasks"
] | https://openreview.net/pdf?id=6s2YsOZPYcb8N | https://openreview.net/forum?id=6s2YsOZPYcb8N | vDY7MvZACzMTc | review | 1,362,181,620,000 | 6s2YsOZPYcb8N | [
"everyone"
] | [
"Sam Bowman"
] | ICLR.cc/2013/conference | 2013 | review: I was very impressed by some of these results—especially those for the noembed models—and this does seem to provide evidence that the high performance of RAEs on sentence-level binary sentiment classification need not reflect a breakthrough due to the use of tree structures.
There were a couple of points that I would like to see brought up:
There appears to have been follow up work by some of the same authors on RAE models for other related tasks, and it seems somewhat unfair to claim that 'the trees and the embeddings model the same phenomena,' when using one particularly uncomplicated domain of phenomena (binary sentiment) as a case study.
Less critically, I would like to see some discussion of (and further investigation into) the extremely poor performances seen with the sub and win models. If I understand correctly that the best-class baseline should achieve at least 50% accuracy, achieving a result substantially worse than that seems to reflect a robust and potentially interesting result about the role of strongly positive and strongly negative words. |
ttxM6DQKghdOi | Discrete Restricted Boltzmann Machines | [
"Guido F. Montufar",
"Jason Morton"
] | In this paper we describe discrete restricted Boltzmann machines: graphical probability models with bipartite interactions between discrete visible and hidden variables. These models generalize standard binary restricted Boltzmann machines and discrete naive Bayes models. For a given number of visible variables and cardinalities of their state spaces, we bound the number of hidden variables, depending on the cardinalities of their state spaces, for which the model is a universal approximator of probability distributions. More generally, we describe tractable exponential subfamilies and use them to bound the maximal and expected Kullback-Leibler approximation errors of these models from above. We discuss inference functions, mixtures of product distributions with shared parameters, and patterns of strong modes of probability distributions represented by discrete restricted Boltzmann machines in terms of configurations of projected products of simplices in normal fans of products of simplices. Finally, we use tropicalization and coding theory to study the geometry of these models, and show that in many cases they have the expected dimension but in some cases they do not. Keywords: expected dimension, tropical statistical model, distributed representation, q-ary variable, Kullback-Leibler divergence, hierarchical model, mixture model, Hadamard product, universal approximation, covering code | [
"boltzmann machines",
"discrete",
"models",
"hidden variables",
"number",
"cardinalities",
"state spaces",
"probability distributions",
"products",
"simplices"
] | https://openreview.net/pdf?id=ttxM6DQKghdOi | https://openreview.net/forum?id=ttxM6DQKghdOi | uc6XK8UgDGKmi | review | 1,363,572,060,000 | ttxM6DQKghdOi | [
"everyone"
] | [
"Guido F. Montufar, Jason Morton"
] | ICLR.cc/2013/conference | 2013 | review: We appreciate the comments of all three reviewers. We posted a revised version of the paper to the arxiv (scheduled to be announced March 18 2013).
While reviewer 1922 found the paper ``comprehensive'' and ``clearly written'', reviewers e437 and fce0 were very concerned with the presentation of the paper, describing it as ``clearly not written for a machine learning audience'' and ``As it is, the paper does not cater to a machine learning crowd '' and recommended ``this paper should be submitted to a journal'' (e437) and ``I advise the authors to either: - submit it to an algebraic geometry venue - give as many intuitions as possible to help the reader get a full grasp on the results presented. '' (fce0).
Having these recommendations in mind we recognized how certain parts of the original paper might have been too technical to be presented in this venue. We decided to revise the paper focusing on the results that could be most interesting for ICLR, providing a more intuitive picture of the main results, and to treat the purely mathematical problems elsewhere.
We significantly shortened the paper from 20 to 11.5 pages + references. We reorganized the entire paper in order to improve the readability and reduce the number of definitions and concepts used throughout. In the revision we focus on the main results which do not require much mathematical background. Following the recommendation ``it is unreasonable to put all the proof in the supplementary material where they are unlikely to receive the necessary attention'' we included the proofs in the main part of the paper.
We appreciate the positive comments of reviewer 1922, which served as orientation for which of the results could be most interesting to present here in detail. Further, we thank reviewer 1922 for the literature suggestions regarding RBMs with interactions within layers and training, but in the re-organized paper we elected not to treat these topics. |
ttxM6DQKghdOi | Discrete Restricted Boltzmann Machines | [
"Guido F. Montufar",
"Jason Morton"
] | In this paper we describe discrete restricted Boltzmann machines: graphical probability models with bipartite interactions between discrete visible and hidden variables. These models generalize standard binary restricted Boltzmann machines and discrete naive Bayes models. For a given number of visible variables and cardinalities of their state spaces, we bound the number of hidden variables, depending on the cardinalities of their state spaces, for which the model is a universal approximator of probability distributions. More generally, we describe tractable exponential subfamilies and use them to bound the maximal and expected Kullback-Leibler approximation errors of these models from above. We discuss inference functions, mixtures of product distributions with shared parameters, and patterns of strong modes of probability distributions represented by discrete restricted Boltzmann machines in terms of configurations of projected products of simplices in normal fans of products of simplices. Finally, we use tropicalization and coding theory to study the geometry of these models, and show that in many cases they have the expected dimension but in some cases they do not. Keywords: expected dimension, tropical statistical model, distributed representation, q-ary variable, Kullback-Leibler divergence, hierarchical model, mixture model, Hadamard product, universal approximation, covering code | [
"boltzmann machines",
"discrete",
"models",
"hidden variables",
"number",
"cardinalities",
"state spaces",
"probability distributions",
"products",
"simplices"
] | https://openreview.net/pdf?id=ttxM6DQKghdOi | https://openreview.net/forum?id=ttxM6DQKghdOi | AAvOd8oYsZAh8 | review | 1,362,487,980,000 | ttxM6DQKghdOi | [
"everyone"
] | [
"anonymous reviewer fce0"
] | ICLR.cc/2013/conference | 2013 | title: review of Discrete Restricted Boltzmann Machines
review: This paper reviews properties of the Naive Bayes models and Binary RBMs before moving on to introducing discrete RBMs for which they extend universal approximation and other properties.
I think such a review and extensions are extremely interesting for the more theoretical fields such as algebraic geometry. As it is, the paper does not cater to a machine learning crowd as it's mostly a sequence of mathematical definitions and theorem statements. I advise the authors to either:
- submit it to an algebraic geometry venue
- give as many intuitions as possible to help the reader get a full grasp on the results presented.
For the latter point, I advise against using sentences such as 'In algebraic geometrical terms this is a Hadamard product of a collection of secant varieties of the Segre embedding of the product of a collection of projective spaces'. Though it sounds incredibly intelligent, I didn't get anything from it, despite my fair knowledge of RBMs.
This work of explaining the results is done fairly well in the Results section, especially for the universal approximation property and the approximation error. This is a good target for the review part of the paper. |
ttxM6DQKghdOi | Discrete Restricted Boltzmann Machines | [
"Guido F. Montufar",
"Jason Morton"
] | In this paper we describe discrete restricted Boltzmann machines: graphical probability models with bipartite interactions between discrete visible and hidden variables. These models generalize standard binary restricted Boltzmann machines and discrete naive Bayes models. For a given number of visible variables and cardinalities of their state spaces, we bound the number of hidden variables, depending on the cardinalities of their state spaces, for which the model is a universal approximator of probability distributions. More generally, we describe tractable exponential subfamilies and use them to bound the maximal and expected Kullback-Leibler approximation errors of these models from above. We discuss inference functions, mixtures of product distributions with shared parameters, and patterns of strong modes of probability distributions represented by discrete restricted Boltzmann machines in terms of configurations of projected products of simplices in normal fans of products of simplices. Finally, we use tropicalization and coding theory to study the geometry of these models, and show that in many cases they have the expected dimension but in some cases they do not. Keywords: expected dimension, tropical statistical model, distributed representation, q-ary variable, Kullback-Leibler divergence, hierarchical model, mixture model, Hadamard product, universal approximation, covering code | [
"boltzmann machines",
"discrete",
"models",
"hidden variables",
"number",
"cardinalities",
"state spaces",
"probability distributions",
"products",
"simplices"
] | https://openreview.net/pdf?id=ttxM6DQKghdOi | https://openreview.net/forum?id=ttxM6DQKghdOi | _YRe0x39e7YBa | review | 1,363,534,860,000 | ttxM6DQKghdOi | [
"everyone"
] | [
"Aaron Courville"
] | ICLR.cc/2013/conference | 2013 | review: To the reviewers of this paper,
There appears to be some disagreement about the utility of the contributions of this paper to a machine learning audience.
Please read over the comments of the other reviewers and submit comments as you see fit. |
ttxM6DQKghdOi | Discrete Restricted Boltzmann Machines | [
"Guido F. Montufar",
"Jason Morton"
] | In this paper we describe discrete restricted Boltzmann machines: graphical probability models with bipartite interactions between discrete visible and hidden variables. These models generalize standard binary restricted Boltzmann machines and discrete naive Bayes models. For a given number of visible variables and cardinalities of their state spaces, we bound the number of hidden variables, depending on the cardinalities of their state spaces, for which the model is a universal approximator of probability distributions. More generally, we describe tractable exponential subfamilies and use them to bound the maximal and expected Kullback-Leibler approximation errors of these models from above. We discuss inference functions, mixtures of product distributions with shared parameters, and patterns of strong modes of probability distributions represented by discrete restricted Boltzmann machines in terms of configurations of projected products of simplices in normal fans of products of simplices. Finally, we use tropicalization and coding theory to study the geometry of these models, and show that in many cases they have the expected dimension but in some cases they do not. Keywords: expected dimension, tropical statistical model, distributed representation, q-ary variable, Kullback-Leibler divergence, hierarchical model, mixture model, Hadamard product, universal approximation, covering code | [
"boltzmann machines",
"discrete",
"models",
"hidden variables",
"number",
"cardinalities",
"state spaces",
"probability distributions",
"products",
"simplices"
] | https://openreview.net/pdf?id=ttxM6DQKghdOi | https://openreview.net/forum?id=ttxM6DQKghdOi | 86Fqwo3AqRw0s | review | 1,362,471,060,000 | ttxM6DQKghdOi | [
"everyone"
] | [
"anonymous reviewer 1922"
] | ICLR.cc/2013/conference | 2013 | title: review of Discrete Restricted Boltzmann Machines
review: This paper presents a comprehensive theoretical discussion on the approximation properties of discrete restricted Boltzmann machines. The paper is clearly written. It provides a contextual introduction to the theoretical results by reviewing approximation results for Naive Bayes models and binary restricted Boltzmann machines. Section 4 of the paper lists the theoretical contributions, while proofs are are delayed to the appendix.
Notably, the first result gives conditions, based on the number of hidden and visible units together with their cardinalities, for the joint RBM to be a universal approximator of distributions over the visible units. The theorem provides an extension to previous results for binary RBMs. The second result shows that discrete RBMs can represent distributions with a number of strong modes that is exponential in the number of hidden units, but not necessarily exponential in the number of parameters. The third result shows that discrete RBMs can approximate any mixture of product distributions, with disjoint supports, arbitrarily well.
Proposition 10 is a nice result showing that a discrete RBM is a Hadamard product of mixtures of product distributions. These decompositions often help with the design of inference algorithms. Lemma 25 provides useful connections between RBMs and mixtures. Subsequently theorem 27 discusses the relation to exponential families.
Theorem 29 provides a very nice approximation bound for the KL divergence between the RBM and a distribution in the set of all distributions over the discrete state space, and so on. The paper also presents a geometry analysis but I did not follow all the appendix details about these.
Finally the appendices discuss interactions within layers and training. With regard to the first issue, I think the authors should consult
H. J. Kappen. Deterministic learning rules for Boltzmann machines. Neural Networks, 8(4):537-548, 1995
which discusses these lateral connections and approximation properties. With regard to training, I recommend the following expositions to the authors. The last one considers a different aspect of the theory of RBMs, namely statistical efficiency of the estimators:
Marlin, Benjamin, Kevin Swersky, Bo Chen, and Nando de Freitas. 'Inductive principles for restricted boltzmann machine learning.' In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 509-516. 2010.
Tieleman, Tijmen, and Geoffrey Hinton. 'Using fast weights to improve persistent contrastive divergence.' In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1033-1040. ACM, 2009.
Marlin, Benjamin, and Nando de Freitas. 'Asymptotic Efficiency of Deterministic Estimators for Discrete Energy-Based Models'. UAI 2011.
The above provide a more clear picture of stochastic maximum likelihood as well as deterministic estimators.
Minor: Why does your paper end with a b?
In Remark 5, it might be easier to simply use x throughout instead of v. |
ttxM6DQKghdOi | Discrete Restricted Boltzmann Machines | [
"Guido F. Montufar",
"Jason Morton"
] | In this paper we describe discrete restricted Boltzmann machines: graphical probability models with bipartite interactions between discrete visible and hidden variables. These models generalize standard binary restricted Boltzmann machines and discrete naive Bayes models. For a given number of visible variables and cardinalities of their state spaces, we bound the number of hidden variables, depending on the cardinalities of their state spaces, for which the model is a universal approximator of probability distributions. More generally, we describe tractable exponential subfamilies and use them to bound the maximal and expected Kullback-Leibler approximation errors of these models from above. We discuss inference functions, mixtures of product distributions with shared parameters, and patterns of strong modes of probability distributions represented by discrete restricted Boltzmann machines in terms of configurations of projected products of simplices in normal fans of products of simplices. Finally, we use tropicalization and coding theory to study the geometry of these models, and show that in many cases they have the expected dimension but in some cases they do not. Keywords: expected dimension, tropical statistical model, distributed representation, q-ary variable, Kullback-Leibler divergence, hierarchical model, mixture model, Hadamard product, universal approximation, covering code | [
"boltzmann machines",
"discrete",
"models",
"hidden variables",
"number",
"cardinalities",
"state spaces",
"probability distributions",
"products",
"simplices"
] | https://openreview.net/pdf?id=ttxM6DQKghdOi | https://openreview.net/forum?id=ttxM6DQKghdOi | gE0uE2A98H59Y | review | 1,360,957,080,000 | ttxM6DQKghdOi | [
"everyone"
] | [
"anonymous reviewer e437"
] | ICLR.cc/2013/conference | 2013 | title: review of Discrete Restricted Boltzmann Machines
review: The paper provides a theoretical analysis of Restricted Boltzmann Machines with multivalued discrete units, with the emphasis on representation capacity of such models.
Discrete RBMs are a special case of exponential family harmoniums introduced by Welling et al. [1] and have been known under the name of multinomial or softmax RBMs. The parameter updates given in the paper, which are its only contribution that is not purely theoretical, are not novel and have been known for some time. Though the authors claim that their analysis can serve as a starting point for developing novel machine learning algorithms, I am unable to see how that applies to any of the results in the paper. Thus the only contributions of the paper are theoretical.
Unfortunately, those theoretical contributions do not seem particularly interesting, at least from the machine learning perspective, appearing to be direct generalizations of the corresponding results for binary RBMs. The biggest problem with the paper, however, is presentation. The paper is clearly not written for a machine learning audience. The presentation is extremely technical and even the 'non-technical' outline in Section 4 is difficult to follow. Given that the only novel contribution of the paper is the results proved in it, it is unreasonable to put all the proofs in the supplementary material, where they are unlikely to receive the necessary attention. The fact that the proofs will not fit in the paper due to the ICLR page limit simply highlights the fact that this paper should be submitted to a journal.
[1] Welling, M., Rosen-Zvi, M., & Hinton, G. (2005). Exponential family harmoniums with an application to information retrieval. Advances in Neural Information Processing Systems, 17, 1481-1488. |
jbLdjjxPd-b2l | Natural Gradient Revisited | [
"Razvan Pascanu",
"Yoshua Bengio"
] | The aim of this paper is twofold. First we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature. | [
"natural gradient",
"aim",
"first",
"optimization",
"martens",
"subspace descent",
"vinyals",
"povey",
"implementations"
] | https://openreview.net/pdf?id=jbLdjjxPd-b2l | https://openreview.net/forum?id=jbLdjjxPd-b2l | 37JmPPz9dT39G | comment | 1,363,216,920,000 | uEQsuu1xiBueM | [
"everyone"
] | [
"Razvan Pascanu, Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | reply: We've made drastic changes to the paper, which should be visible starting Thu, 14 Mar 2013 00:00:00 GMT. We made the paper available also at http://www-etud.iro.umontreal.ca/~pascanur/papers/ICLR_natural_gradient.pdf
* Regarding the title, we have changed it to 'Revisiting Natural Gradient for Deep Networks', to better reflect the scope of the paper. We would like to thank you for the pointers on natural gradient for reinforcement learning and stochastic search (they have been incorporated in the new version).
* We have provided more details of our derivation in section 2. We've added a plot describing the path taken by different learning algorithms in parameter space (natural gradient, gradient descent, Newton's method, Le Roux's natural gradient). The new plot can be found at the beginning of section 4. If you have another suggestion for a drawing that would help to illustrate what is going on, we would be pleased to know about it.
* As you suggested, we have fixed the statement about the confusion between the two versions of natural gradient in the past literature.
* We have slightly rephrased our arguments in section 6 to better point out our intuitions. Indeed the approximations involved in the algorithms have an important role in its behaviour and we clarified this.
However we were trying to say something different. We rephrased our argument as follows: robustness comes from the fact that we move in the direction of low variance for (d p(y|x))/(d theta) (the Fisher Information matrix is the uncentered covariance of these gradients). These are not the gradients of the error we minimize, but we argue that directions of high variance for these gradients are directions of high variance for the gradients of the error function as well. Our reasoning is as follows. If moving in a direction `delta` would cause a large change in the gradient dL/dtheta, where L is the cost, this means that L(theta+delta) has to be 'very' different for different inputs.
But since L(theta) is just the evaluation of p(y|x) for particular values of y for a given x, this means that if L varies with x so does p(y|x). This means `delta` has to be a direction of high variance for the metric. This is true even if you move with infinitesimal speed, as it is more about picking the direction of low variance. This formulation is based on the same argument you provided yourself regarding our early-overfitting experiment. Note that large variations in p(y|x) should be reflected in large curvature of the KL, as it indicates that p changes quickly. We originally formulated the argument around these large changes of p. We agree however that our original argument could have been clearer and more complete, and we hope it is clearer in the new version.
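For reference, the standard quantities involved here can be written out explicitly (textbook formulation, not a quote from the paper; the Fisher information matrix is stated, as usual, as the uncentered covariance of the gradient of the log-likelihood, i.e. the score):

% Fisher information matrix (uncentered covariance of the score):
F(\theta) = \mathbb{E}_{x}\, \mathbb{E}_{y \sim p_\theta(y\mid x)}\left[ \nabla_\theta \log p_\theta(y\mid x)\, \nabla_\theta \log p_\theta(y\mid x)^{\top} \right]
% Natural gradient step on the training cost L:
\Delta\theta \propto -\, F(\theta)^{-1}\, \nabla_\theta L(\theta)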
* Regarding the use of unlabeled data, we added the proposed citation.
* We have provided both pseudo code in the appendix, and we have made the code available at git@github.com:pascanur/natgrad.git
* Regarding the early-overfitting experiment, we agree with the reviewer that natural gradient reduces variance overall. In terms of relative variance it seems that it does not make a big difference. We however emphasize that reducing overall variance is important, as it makes learning overall less sensitive to the order of training examples (which in some sense is related to the original problem, in the sense that it also reduces the importance of the early examples to the behaviour of the trained model). We agree that the original focus of the section was slightly off, and we changed it to addressing the sensitivity of the model to the examples it sees during training. Thanks again for the comment.
* We did use a grid search for the other experiments, and used for each algorithm the point on the grid that had the smallest validation error (a detail that we explicitly state in the paper). We are however in the process of improving those results by extending our grid. |
jbLdjjxPd-b2l | Natural Gradient Revisited | [
"Razvan Pascanu",
"Yoshua Bengio"
] | The aim of this paper is twofold. First we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly we re-derive Natural Gradient from basic principles, contrasting the difference between the two versions of the algorithm that are in the literature. | [
"natural gradient",
"aim",
"first",
"optimization",
"martens",
"subspace descent",
"vinyals",
"povey",
"implementations"
] | https://openreview.net/pdf?id=jbLdjjxPd-b2l | https://openreview.net/forum?id=jbLdjjxPd-b2l | uEQsuu1xiBueM | review | 1,362,372,600,000 | jbLdjjxPd-b2l | [
"everyone"
] | [
"anonymous reviewer 6f71"
] | ICLR.cc/2013/conference | 2013 | title: review of Natural Gradient Revisited
review: Summary
The paper reviews the concept of natural gradient, re-derives it in the context of neural network training, compares a number of natural gradient-based algorithms and discusses their differences. The paper's aims are highly relevant to the state of the field, and it contains numerous valuable insights. Precisely because of its topic's importance, however, I deplore its lack of maturity, especially in terms of experimental results and literature overview.
Comments
-- The title raises the expectation of a review-style paper with a broad literature overview on the topic, but that aspect is underdeveloped. A paper such as this would be a perfect opportunity to relate natural gradient-related work in neural networks to closely related approaches in reinforcement learning [1,2] and stochastic search [3].
-- The discussion in section 2 is correct and useful, but would benefit enormously from an illustrative figure that clarifies the relation between parameter space and distribution manifold, and how gradient directions differ in both. The last sentence (Lagrange method) is also breezing over a number of details that would benefit from a more explicit treatment.
-- There is a recurring claim that gradient-covariances are 'usually confused' with Fisher matrices. While there are indeed a few authors who did fall victim to this, it is not a belief held by many researchers working on natural gradients; please reformulate.
-- The information-geometric manifold is generally highly curved, which means that results that hold for infinitesimal step-sizes do not generally apply to realistic gradient algorithms with large finite steps. Indeed, [4] introduces an information-geometric 'flow' and contrasts it with its finite-step approximations. It is important to distinguish the effect of the natural gradient itself from the artifacts of finite-step approximations, indeed the asymptotic behavior can differ, see [5]. A number of arguments in section 6 could be revised in this light.
-- The idea of using more data to estimate the Fisher information matrix (because it does not need to be labeled), compared to the data necessary for the steepest gradient itself, is promising for semi-supervised neural network training. It was previously presented in [3], in a slightly different context with infinitely many unlabeled samples.
-- The new variants of natural gradient descent should be given in pseudocode in the appendix, and if possible even with a reference open-source implementation in the Theano framework.
-- The experiment presented in Figure 2 is very interesting, although I disagree with the conclusions that are derived from it: the variance is qualitatively the same for both algorithms, just rescaled by roughly a factor 4. So, relatively speaking, the influence of early samples is still equally strong, only the generic variability of the natural gradient is reduced: plausibly by the effect that the Fisher-preconditioning reduces step-sizes in directions of high variance.
-- The other experiments, which focus on test-set performance, have a major flaw: it appears each algorithm variant was run exactly once on each dataset, which makes it very difficult to judge whether the results are significant. Also, the effect of hyper-parameter-tuning on those results is left vague.
Minor points/typos
-- Generally, structure the text such that equations are presented before they are referred to; this makes for a more linear reading flow.
-- variable n is undefined
-- clarify which spaces the variables x, z, t, theta live in.
-- 'three most typical'
-- 'different parametrizations of the model'
-- 'similar derivations'
-- 'plateaus'
-- axes of figures could be homogenized.
References
[1] 'Natural policy gradient', Kakade, NIPS 2002.
[2] 'Natural Actor-Critic', Peters and Schaal, Neurocomputing 2008.
[3] 'Stochastic Search using the Natural Gradient', Sun et al, ICML 2009.
[4] 'Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles', Arnold et al, Arxiv 2011.
[5] 'Natural Evolution Strategies Converge on Sphere Functions', Schaul, GECCO 2012. |