forum_id
stringlengths 8
20
| forum_title
stringlengths 1
899
| forum_authors
sequencelengths 0
174
| forum_abstract
stringlengths 0
4.69k
| forum_keywords
sequencelengths 0
35
| forum_pdf_url
stringlengths 38
50
| forum_url
stringlengths 40
52
| note_id
stringlengths 8
20
| note_type
stringclasses 6
values | note_created
int64 1,360B
1,737B
| note_replyto
stringlengths 4
20
| note_readers
sequencelengths 1
8
| note_signatures
sequencelengths 1
2
| venue
stringclasses 349
values | year
stringclasses 12
values | note_text
stringlengths 10
56.5k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
gGivgRWZsLgY0 | Clustering Learning for Robotic Vision | [
"Eugenio Culurciello",
"Jordan Bates",
"Aysegul Dundar",
"Jose Carrasco",
"Clement Farabet"
] | We present the clustering learning technique applied to multi-layer feedforward deep neural networks. We show that this unsupervised learning technique can compute network filters with only a few minutes and a much reduced set of pa- rameters. The goal of this paper is to promote the technique for general-purpose robotic vision systems. We report its use in static image datasets and object track- ing datasets. We show that networks trained with clustering learning can outper- form large networks trained for many hours on complex datasets. | [
"robotic vision",
"clustering learning technique",
"unsupervised learning technique",
"network filters",
"minutes",
"set",
"rameters",
"goal",
"technique"
] | https://openreview.net/pdf?id=gGivgRWZsLgY0 | https://openreview.net/forum?id=gGivgRWZsLgY0 | KOmskcVuMBOLt | review | 1,362,354,720,000 | gGivgRWZsLgY0 | [
"everyone"
] | [
"anonymous reviewer d6ae"
] | ICLR.cc/2013/conference | 2013 | title: review of Clustering Learning for Robotic Vision
review: I am *very* sympathetic to the aims of the authors:
Find simple, effective and fast deep networks to understand sensor data. The authors defer some of the more interesting bits to future work however: they note that sum-abs-diff should be much more efficient in silicon implementation then convolution style results. That would indeed be interesting, and would make this paper all the more exciting.
Methods that make such learning efficient are certainly important.
The paper references [8] to explain seemingly significant details. At least a summary seems in order.
Ideally a pseudo-code description. Currently the paper is close to un-reproducible, which is unfortunate as it seems easy to correct that.
Details of the contrast normalization would make the paper more self contained. This is a rather specialized technique (by that name), and should be discussed when addressing the community at large.
I can't parse the second paragraph of Section 2.3.
Many of the details elements seem a bit unmotivated. The authors repeatedly mention in the beginning (which seem like details too early in the paper) things they *don't* do (Whitening, ZCA?, color space changes), but these seem like details, and the motivation to *not* perform them isn't compelling. Why the difference in connectivity between the CNN and that presented here?
The video data-sets are very interesting and compelling. It would be good for a sense of scale to report the results for [22,30] as well.
The fact that second layers that are random still works well is interesting:
'The randomly connected 2nd layer used a fixed CNN layer as described in section 2.2.'
Why was this experiment with a random CNN, and not a random CL (sum-abs-diff) to match the experiments?
What does one conclude from these results?
It seems the first layer is, to a close approximate, equivalent to a Gabor filter bank. The second layer, random appears to be perfectly acceptable. (Truly random? How does randomly initialized from the data do?)
That seems rather disappointing from a learning point of view.
In general the paper reads a bit like an early draft of a workshop paper. Interesting experiments, but hard to read, and seemingly incomplete.
A few points on style seem in order:
First the authors graciously acknowledge prior work and say 'we could not have done any of this work without standing on the shoulders of giants.' Oddly, one of the giants acknowledged is one of the authors. I assume this is some last minute mistake due to authorship changes, but it reflects the rushed and incomplete nature of the work.
Similarly, the advertisements for the laboratory in the footnotes are odd and out of place in a conference submission.
The title seems slightly misleading: this seems to be a straightforward ML for vision paper with otherwise no connection to robotics.
Pros:
Tacking important issues for the field.
Good and interesting experiments.
Focus on performance is important.
Cons:
Difficult to understand details of implementation.
Many design decisions make it hard to compare/contrast techniques and seem unmotivated.
Some of the most interesting work (demonstrating benefits of technique in performance) are deferred to future work.
Style and writing make the paper difficult to read. |
gGivgRWZsLgY0 | Clustering Learning for Robotic Vision | [
"Eugenio Culurciello",
"Jordan Bates",
"Aysegul Dundar",
"Jose Carrasco",
"Clement Farabet"
] | We present the clustering learning technique applied to multi-layer feedforward deep neural networks. We show that this unsupervised learning technique can compute network filters with only a few minutes and a much reduced set of pa- rameters. The goal of this paper is to promote the technique for general-purpose robotic vision systems. We report its use in static image datasets and object track- ing datasets. We show that networks trained with clustering learning can outper- form large networks trained for many hours on complex datasets. | [
"robotic vision",
"clustering learning technique",
"unsupervised learning technique",
"network filters",
"minutes",
"set",
"rameters",
"goal",
"technique"
] | https://openreview.net/pdf?id=gGivgRWZsLgY0 | https://openreview.net/forum?id=gGivgRWZsLgY0 | DGTnGO8CnrcPN | review | 1,362,366,000,000 | gGivgRWZsLgY0 | [
"everyone"
] | [
"anonymous reviewer d2a7"
] | ICLR.cc/2013/conference | 2013 | title: review of Clustering Learning for Robotic Vision
review: # Summary
This paper compares two types of filtering operator (linear filtering vs. distance filtering) in convolutional neural networks for image processing. The paper evaluates two fairly arbitrarily-chosen architectures on the CIFAR-10 and SVHN image labeling tasks, and shows that neither of these architectures is very effective, but that the conventional linear operator works better. The paper nevertheless advocates the use of the distance filtering operation on grounds of superior theoretical efficiency on e.g. FPGA hardware, but details of this argument and empirical substantiation are left for future work. The distance-based algorithm was more accurate than the linear-filtering architecture on a tracking task. How good the tracker is relative to other algorithms in the literature on the data set is not clear; I am admittedly not an expert in object tracking, and the authors simply state that it is 'not state-of-the-art.'
The paper's value as a report to roboticists on the merit of either clustering or linear operators is undermined by the lack of discussion or guidance regarding how one might go beyond the precise experiments done in the paper. The paper includes several choices that seem arbitrary: filter sizes, filter counts, numbers of layers, and so on. Moreover these apparently arbitrary choices are made differently for the different data sets. Compared to other papers dealing with these data sets, the authors have made the model much smaller, faster, and less accurate. The authors stress that the interest of their work is in enabling 'real-time' operation on a laptop, but I don't personally see the interest of targeting such CPUs for real-time performance and the paper does not argue the point.
The authors also emphasize the value of fast unsupervised learning based on clustering, but the contribution of this work beyond that of Coates et al. published in 2011 and 2012 is not clear.
# Detailed Comments
The statement 'We used the Torch7 software for all our experiments [18], since this software can reduce training and learning of deep networks by 5-10 times compared to similar Matlab and Python tools.' sounds wrong to me. A citation would help defend the statement, but if you meant simply to cite the benchmarking from [18], then the authors should also cite follow-up work, particularly Bastien et al. 2012 ('Theano: new features and speed improvements').
The use of 'bio-inspired' local contrast normalization instead of whitening should include citation to previous work. (i.e. Why/how is the technique inspired by biology?)
Is the SpatialSAD model the authors' own innovation? If so, more details should be listed. If not, a citation to a publication with more details should be listed. I have supposed that they are simply computing a squared Euclidean distance between filter and image patch as the filter response.
Regarding the two architectures used for performance comparison - I am left wondering why the authors chose not to use spatial contrastive normalization in both architectures. As tested, performance differences could be attributed to *either* the CL or the spatial contrast normalization.
I am a little confused by the phrase 'correlate filter responses to inputs' - with the sum-of-squared-differences operator at work, my intuition would be that inputs are less-correlated to filter responses than they would be with a convolution operator.
The use of the phrase 'fully connected' in the second-last paragraph on page 3 is confusing - I am assuming the authors mean all *channels* are connected to all *channels* in filters applied by the convolution operator. Usually in neural networks literature, the phrase 'fully connected' means that all *units* are connected between two layers.
The results included no discussion of measurement error. |
ACBmCbico7jkg | Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models | [
"Derek Rose",
"Itamar Arel"
] | Hyper-parameter selection remains a daunting task when building a pattern recognition architecture which performs well, particularly in recently constructed visual pipeline models for feature extraction. We re-formulate pooling in an existing pipeline as a function of adjustable pooling map weight parameters and propose the use of supervised error signals from gradient descent to tune the established maps within the model. This technique allows us to learn what would otherwise be a design choice within the model and specialize the maps to aggregate areas of invariance for the task presented. Preliminary results show moderate potential gains in classification accuracy and highlight areas of importance within the intermediate feature representation space. | [
"task",
"maps",
"model",
"gradient",
"selection",
"pattern recognition architecture",
"visual pipeline models",
"feature extraction",
"pipeline"
] | https://openreview.net/pdf?id=ACBmCbico7jkg | https://openreview.net/forum?id=ACBmCbico7jkg | RRH1s5U_dcQjB | review | 1,362,378,120,000 | ACBmCbico7jkg | [
"everyone"
] | [
"anonymous reviewer 06d9"
] | ICLR.cc/2013/conference | 2013 | review: NA |
ACBmCbico7jkg | Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models | [
"Derek Rose",
"Itamar Arel"
] | Hyper-parameter selection remains a daunting task when building a pattern recognition architecture which performs well, particularly in recently constructed visual pipeline models for feature extraction. We re-formulate pooling in an existing pipeline as a function of adjustable pooling map weight parameters and propose the use of supervised error signals from gradient descent to tune the established maps within the model. This technique allows us to learn what would otherwise be a design choice within the model and specialize the maps to aggregate areas of invariance for the task presented. Preliminary results show moderate potential gains in classification accuracy and highlight areas of importance within the intermediate feature representation space. | [
"task",
"maps",
"model",
"gradient",
"selection",
"pattern recognition architecture",
"visual pipeline models",
"feature extraction",
"pipeline"
] | https://openreview.net/pdf?id=ACBmCbico7jkg | https://openreview.net/forum?id=ACBmCbico7jkg | cjiVGTKF7OjND | review | 1,362,402,060,000 | ACBmCbico7jkg | [
"everyone"
] | [
"anonymous reviewer f473"
] | ICLR.cc/2013/conference | 2013 | title: review of Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models
review: The paper proposes to learn the weights of the pooling region in a neural network for recognition. The idea is a good one, but the paper is a bit terse. Its not really clear what we are looking at in Figure 1b - the different quadrants and so forth - but I would guess the red blobs are the learned pooling regions. Its kind of what you expect, so it also begs the question of whether this teaches us anything new. But still it seems like a sensible approach and worth reporting. I suppose one can view it as a validation of the pooling envelope that is typically assumed. |
ACBmCbico7jkg | Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models | [
"Derek Rose",
"Itamar Arel"
] | Hyper-parameter selection remains a daunting task when building a pattern recognition architecture which performs well, particularly in recently constructed visual pipeline models for feature extraction. We re-formulate pooling in an existing pipeline as a function of adjustable pooling map weight parameters and propose the use of supervised error signals from gradient descent to tune the established maps within the model. This technique allows us to learn what would otherwise be a design choice within the model and specialize the maps to aggregate areas of invariance for the task presented. Preliminary results show moderate potential gains in classification accuracy and highlight areas of importance within the intermediate feature representation space. | [
"task",
"maps",
"model",
"gradient",
"selection",
"pattern recognition architecture",
"visual pipeline models",
"feature extraction",
"pipeline"
] | https://openreview.net/pdf?id=ACBmCbico7jkg | https://openreview.net/forum?id=ACBmCbico7jkg | 55Y25pcVULOXK | review | 1,362,378,060,000 | ACBmCbico7jkg | [
"everyone"
] | [
"anonymous reviewer 06d9"
] | ICLR.cc/2013/conference | 2013 | title: review of Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models
review: The paper by Rose & Arel entitled 'Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models' describes a new approach for optimizing hyper parameters in spatial pyramid-like architectures.
Specifically, an architecture is presented which corresponds to a spatial pyramid where a two-layer neural net replaces the SVM in the final classification stage. The key contribution is to formulate the pooling operation in the spatial pyramid as a weighted sum over inputs which enables learning of the pooling receptive fields via back-propagation.
Pros: The paper addresses an important problem in the field of computer vision. Spatial pyramids are currently quite popular in the computer vision community and optimizing the many free parameters which are normally tuned by hand is a key problem.
Cons: The contribution of the present work remains very limited both in terms of the actual problem formulation and the empirical evaluation (no comparison to alternative approaches such as the recent work by Jia et al [ref 5] is shown). The overall 0.5% improvement in accuracy over non-optimized hyper-parameters is quite disappointing. In future work, the authors should compare their approach with alternative approaches in addition to suggesting significant improvement over non-optimized/standard parameters.
Additional comments: Additional references that could be added, discussed and/or used for benchmark. Y Boureau and J Ponce. A theoretical analysis of feature pooling in visual recognition. In ICML, 2010 and Pinto N, Doukhan D, DiCarlo JJ, Cox DD (2009). A High-Throughput Screening Approach to Discovering Good Forms of Biologically-Inspired Visual Representation. PLoS Computational Biology 5(11): e1000579. doi:10.1371/journal.pcbi.1000579. |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | [
"Ryan Kiros"
] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with small gradient and curvature mini-batches independent of the dataset size for classification. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. On classification tasks, stochastic HF achieves accelerated training and competitive results in comparison with dropout SGD without the need to tune learning rates. | [
"optimization",
"neural networks",
"stochastic",
"deep autoencoders",
"recurrent networks",
"conjugate gradient algorithm",
"update directions",
"products",
"order"
] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | nYshYtAXG48ze | review | 1,364,786,880,000 | tFbuFKWX3MFC8 | [
"everyone"
] | [
"Ryan Kiros"
] | ICLR.cc/2013/conference | 2013 | review: I want to say thanks again to the conference organizers, reviewers and openreview.net developers for doing a great job.
I have updated the code on my webpage to include two additional features: max norm weight clipping and training deep autoencoders. Autoencoder training uses symmetric encoding / decoding and supports denoising and L2 penalties. |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | [
"Ryan Kiros"
] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with small gradient and curvature mini-batches independent of the dataset size for classification. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. On classification tasks, stochastic HF achieves accelerated training and competitive results in comparison with dropout SGD without the need to tune learning rates. | [
"optimization",
"neural networks",
"stochastic",
"deep autoencoders",
"recurrent networks",
"conjugate gradient algorithm",
"update directions",
"products",
"order"
] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | mm_3mNH6nD4hc | review | 1,363,601,400,000 | tFbuFKWX3MFC8 | [
"everyone"
] | [
"Ryan Kiros"
] | ICLR.cc/2013/conference | 2013 | review: I have submitted an updated version to arxiv and should appear shortly. My apologies for the delay. From the suggestion of reviewer 0a71 I've renamed the paper to 'Training Neural Networks with Dropout Stochastic Hessian-Free Optimization'. |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | [
"Ryan Kiros"
] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with small gradient and curvature mini-batches independent of the dataset size for classification. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. On classification tasks, stochastic HF achieves accelerated training and competitive results in comparison with dropout SGD without the need to tune learning rates. | [
"optimization",
"neural networks",
"stochastic",
"deep autoencoders",
"recurrent networks",
"conjugate gradient algorithm",
"update directions",
"products",
"order"
] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | TF3miswPCQiau | review | 1,362,400,260,000 | tFbuFKWX3MFC8 | [
"everyone"
] | [
"anonymous reviewer f834"
] | ICLR.cc/2013/conference | 2013 | title: review of Training Neural Networks with Stochastic Hessian-Free Optimization
review: This paper looks at designing an SGD-like version of the 'Hessian-free' (HF) optimization approach which is applied to training shallow to moderately deep neural nets for classification tasks. The approach consists of the usual HF algorithm, but with smaller minibatches and with CG terminated after only 3-5 iterations. As advocated in [20], more careful attention is paid to the 'momentum-constant' gamma.
It is somewhat interesting to see a very data intensive method like HF made 'lighter' and more SGD-like, since this could perhaps provide benefits unique to both HF and SGD, but it's not clear to me from the experiments if there really is an advantage over variants of SGD that would perform some kind of automatic adaptation of learning rates (or even a fixed schedule!). The amount of novelty in the paper isn't particularly high since many of these ideas have been proposed before ([20]), although perhaps in less extreme or less developed forms.
Pros:
- takes the well-known approach HF in a different (if not entirely novel) direction
- seems to achieves performance competitive with versions of SGD used in [3] with dropout
Cons:
- experiments don't look at particularly deep models and aren't very thorough
- comparisons to other versions of SGD are absent (this is my primary issue with the paper)
----
The introduction and related work section should probably clarify that HF is an instance of the more general family of methods sometimes known as 'truncated-Newton methods'.
In the introduction, when you state: 'HF has not been as successful for classification tasks', is this based on your personal experience, particularly negative results in other papers, or lack of positive results in other papers?
Missing from your review are papers that look at the performance of pure stochastic gradient descent applied to learning deep networks, such as [15] did, and the paper by Glorot and Bengio from AISTATS 2010. Also, [18] only used L-BFGS to perform 'fine-tuning' after an initial layer-wise pre-training pass.
When discussing the generalized Gauss-Newton matrix you should probably cite [7].
In section 4.1, it seems like a big oversimplification to say that the stopping criterion and overall convergence rate of CG depend on mostly on the damping parameter lambda. Surely other things matter too, like the current setting of the parameters (which determine the local geometry of the error surface). A high value of lambda may be a sufficient condition, but surely not a necessary one for CG to quickly converge. Moreover, missing from the story presenting in this section is the fact that lambda *must* decrease if the method is to ever behave like a reasonable approximation of a Newton-type method.
The momentum interpretation discussed in the middle of section 4, and overall the algorithm discussed in this paper, sounds similar to ideas discussed in [20] (which were perhaps not fully explored there). Also, a maximum iteration for CG is was used in the original HF paper (although it only appeared in the implementation, and was later discussed in [20]). This should be mentioned.
Could you provide a more thorough explanation of why lambda seems to shrink, then grow, as optimization proceeds? The explanation in 4.2 seems vague/incomplete.
The networks trained seem pretty shallow (especially Reuters, which didn't use any hidden layers). Is there a particular reason why you didn't make them deeper? e.g. were deeper networks overfitting more, or perhaps underfitting due to optimization problems, or simply not providing any significant advantage for some other reasons? SGD is already known to be hard to beat for these kinds of not-very-deep classification nets, and while it seems plausible that the much more SGD-like HF which you are proposing would have some advantage in terms of automatic selection of learning rates, it invites comparison to other methods which do this kind of learning rate tuning more directly (some of which you even discuss in the paper). The lack of these kinds of comparisons seems like a serious weakness of the paper.
And how important to your results was the use of this 'delta-momentum' with the particular schedule of values for gamma that you used? Since this behaves somewhat like a regular momentum term, did you also try using momentum in your SGD implementation to make the comparison more fair?
The experiments use drop-out, but comparisons to implementations that don't use drop-out, or use some other kind of regularization instead (like L2) are noticeably absent. In order understand what the effect of drop-out is versus the optimization method in these models it is important to see this.
I would have been interested to see how well the proposed method would work when applied to very deep nets or RNNs, where HF is thought to have an advantage that is perhaps more significant/interesting than what could be achieved with well tuned learning rates. |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | [
"Ryan Kiros"
] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with small gradient and curvature mini-batches independent of the dataset size for classification. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. On classification tasks, stochastic HF achieves accelerated training and competitive results in comparison with dropout SGD without the need to tune learning rates. | [
"optimization",
"neural networks",
"stochastic",
"deep autoencoders",
"recurrent networks",
"conjugate gradient algorithm",
"update directions",
"products",
"order"
] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | gehZgYtw_1v8S | review | 1,362,161,760,000 | tFbuFKWX3MFC8 | [
"everyone"
] | [
"anonymous reviewer 0a71"
] | ICLR.cc/2013/conference | 2013 | title: review of Training Neural Networks with Stochastic Hessian-Free Optimization
review: Summary and general overview:
----------------------------------------------
The paper tries to explore an online regime for Hessian Free as well as using drop outs. The new method is called Stochastic Hessian Free and is tested on a few datasets (MNIST, USPS and Reuters).
The approach is interesting and it is a direction one might need to consider in order to scale to very large datasets.
Questions:
---------------
(1) An aesthetic point. Stochastic Hessian Free does not seem as a suitable name for the algorithm, as it does not mention the use of drop outs. I think scaling to a stochastic regime is an orthogonal issue to using drop outs, so maybe Drop-out Stochastic Hessian Free would be more suitable, or something rather, that makes the reader aware of the use of drop-outs.
(2) Page 1, first paragraph. Is not clear to me that SGD scales well for large data. There are indications that SGD could suffer, for e.g., from under-fitting issues (see [1]) or early over-fitting (see [2]). I'm not saying you are wrong, you are probably right, just that the sentence you use seems a bit strong and we do not yet have evidence that SGD scales well to very large datasets, especially without the help of things like drop-outs (which might help with early-overfitting or other phenomena).
(3) Page 1, second paragraph. Is not clear to me that HF does not do well for classification. Is there some proof for this somewhere? For e.g. in [3] a Hessian Free like approach seem to do well on classification (note that the results are presented for Natural Gradient, but the paper shows that Hessian Free is Natural Gradient due to the use of Generalized Gauss-Newton matrix).
(4) Page 3, paragraph after the formula. R-operator is only needed to compute the product of the generalized Gauss-Newton approximation of the Hessian with some vector `v`. The product between the Hessian and some vector 'v' can easily be computed as d sum((dC/dW)*v)/dW (i.e. without using the R-op).
(5) Page 4, third paragraph. I do not understand what you mean when you talk about the warm initialization of CG (or delta-momentum as you call it). What does it mean that hat{M}_ heta is positive ? Why is that bad? I don't understand what this decay you use is suppose to do? Are you trying to have some middle ground between starting CG from 0 and starting CG from the previous found solution? I feel a more detailed discussion is needed in the paper.
(6) Page 4, last paragraph. Why does using the same batch size for the gradient and for computing the curvature results in lambda going to 0? Is not obvious to me. Is it some kind of over-fitting effect? If it is just an observation you made through empirical experimentation, just say so, but the wording makes it sound like you expect this behaviour due to some intuitions you have.
(7) Page 5, section 4.3. I feel that the affirmation that drop-outs do not require early stopping is too strong. I feel the evidence is too weak at the moment for this to be a statement. For one thing, eta_e goes exponentially fast to 0. eta_e scales the learning rate, and it might be the reason you do not easily over-fit (when you reach epoch 50 or so you are using a extremely small learning rate). I feel is better to make this as an observation. Also could you maybe say something about this decaying learning rate, is my understanding of eta_e correct?
(8) I feel a important comparison would be between your version of stochastic HF with drop-outs vs stochastic HF (without the drop outs) vs just HF. From the plots you give, I'm not sure what is the gain from going stochastic, nor is it clear to me that drop outs are important. You seem to have the set-up to run this additional experiments easily.
Small corrections:
--------------------------
Page 1, paragraph 1, 'salable` -> 'scalable'
Page 2, last paragraph. You wrote : 'B is a curvature matrix suc as the Hessian'. The curvature of a function `f` at theta is the Hessian (there is no choice) and there is only one curvature for a given function and theta. There are different approximations of the Hessian (and hence you have a choice on B) but not different curvatures. I would write only 'B is an approximation of the curvature matrix` or `B is the Hessian`.
References:
[1] Yann N. Dauphin, Yoshua Bengio, Big Neural Networks Waste Capacity, arXiv:1301.3583
[2] Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent and Samy Bengio, Why Does Unsupervised Pre-training Help Deep Learning? (2010), in: Journal of Machine Learning Research, 11(625--660)
[3] Razvan Pascanu, Yoshua Bengio, Natural Gradient Revisited, arXiv:1301.3584 |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | [
"Ryan Kiros"
] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with small gradient and curvature mini-batches independent of the dataset size for classification. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. On classification tasks, stochastic HF achieves accelerated training and competitive results in comparison with dropout SGD without the need to tune learning rates. | [
"optimization",
"neural networks",
"stochastic",
"deep autoencoders",
"recurrent networks",
"conjugate gradient algorithm",
"update directions",
"products",
"order"
] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | lcfIcbYPqX3P7 | review | 1,367,022,720,000 | tFbuFKWX3MFC8 | [
"everyone"
] | [
"Ryan Kiros"
] | ICLR.cc/2013/conference | 2013 | review: Dear reviewers,
To better account for the mentioned weaknesses of the paper, I've re-implemented SHF with GPU compatibility and evaluated the algorithm on the CURVES and MNIST deep autoencoder tasks. I'm using the same setup as in Chapter 7 of Ilya Sutskever's PhD thesis, which allows for comparison against SGD, HF, Nesterov's accelerated gradient and momentum methods. I'm going to make one final update to the paper before the conference to include these new results. |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | [
"Ryan Kiros"
] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with small gradient and curvature mini-batches independent of the dataset size for classification. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. On classification tasks, stochastic HF achieves accelerated training and competitive results in comparison with dropout SGD without the need to tune learning rates. | [
"optimization",
"neural networks",
"stochastic",
"deep autoencoders",
"recurrent networks",
"conjugate gradient algorithm",
"update directions",
"products",
"order"
] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | av7x0igQwD0M- | review | 1,362,494,640,000 | tFbuFKWX3MFC8 | [
"everyone"
] | [
"Ryan Kiros"
] | ICLR.cc/2013/conference | 2013 | review: Thank you for your comments!
To Anonymous 0a71:
---------------------------------
(1,8): I agree. Indeed, it is straightforward to add an additional experiment without the use of dropout. At the least, the experimental section can be modified to indicate whether the method is using dropout or not instead of simply referring to 'stochastic HF'.
(2): Fair point. It would be interesting trying this method out in a similar experimental setting as [R1]. Perhaps it may give some insight on the paper's hypothesis that the optimization is the culprit to underfitting.
(3): Correct me if I'm wrong but the only classification results of HF I'm aware of are from [R2] in comparison with Krylov subspace decent, not including methods that refer to themselves as natural gradient. Minibatch overfitting in batch HF is problematic and discussed in detail in [R5], pg 50. Given the development of [R3], the introduction could be modified to include additional discussion regarding the relationship with natural gradient and classification settings.
(5): Section 4.5 of [R4] discusses the benefits of non-zero CG initializations. In batch HF, it's completely reasonable to fix gamma throughout training (James uses 0.95). This is problematic in stochastic HF due to such a small number of CG iterations. Given a non-zero CG initialization and a near-one gamma, hat{M}_ heta may be more likely to remain positive after CG and assuming f_k - f_{k-1} < 0, means that the reduction ratio will be negative and thus lambda will be increased to compensate. This is not necessarily a bad thing, although if it happens too frequently the algorithm will began to behave more like SGD (and in some cases the linesearch will reject the step). Setting gamma to some smaller initial value and incrementing at each epoch, based on empirical performance, allows for near-one delta values late in training without negating the reduction ratio. I refer the reader to pg.28 and pg.39 in [R5], which give further motivation and discussion on these topics.
(6): Using the same batches for gradients and curvature have some theoretical advantages (see section 12.1, pg.48 of [R5] for derivations). While lambda -> 0 is indeed an empirical observation, James and Ilya also report similar behaviour for shorter CG runs (although longer than what I use) using the same batches for gradients and curvature (pg.54 of [R5]). Within the proposed stochastic setting, having lambda -> 0 doesn't make too much sense to me (at least for non-convex f). It could allow for much more aggressive steps which may or may not be problematic given how small the curvature minibatches are. One solution is to simply increase the batch sizes, although this was something I was intending to avoid.
(7): The motivation behind eta_e was to help achieve more stable training over the stochastic networks induced using dropout. You are probably right that 'not requiring early stopping' is way too strong of a statement.
To Anonymous 4709:
---------------------------------
Due to the additional complexity of HF compared to SGD, I attempted to make my available (Matlab) code as easy as possible to read and follow through in order to understand and reproduce the key features of the method.
While an immediate advantage of stochastic HF is not requiring tuning learning rate schedules, I think it is also a promising approach in further investigating the effects of overfitting and underfitting with optimization in neural nets, as [R1] motivates. The experimental evaluation does not attack this particular problem, as the goal was to make sure stochastic HF was at least competitive with SGD dropout on standard benchmarks. This to me was necessary to justify further experimentation.
There is no comparison with the results of [R4] since the goal of the paper was to focus on classification (and [R4] only trains on deep autoencoders). Future work includes extending to other architectures, as discussed in the conclusion.
I mention on pg. 7 that the per epoch update times were similar to SGD dropout (I realize this is not particularly rigorous).
In regards to evaluating each of the modifications, I had hoped that the discussion was enough to convey the importance of each design choice. I realize now that there might have been too much assumption of information discussed in [R5]. These details will be made clear in the updated version of the paper with appropriate references.
To Anonymous f834:
--------------------------------
- Thanks for the reference clarifications. In regards to classification tasks, see (3) in my response to Anonymous 0a71.
- Indeed, much of the motivation of the algorithm, particularly the momentum interpretation, came from studying [R5] which expands on HF concepts in significantly more detail then the first publications allowed for. I will be sure to make this more clear in the relevant sections of the paper.
- I agree that not comparing against other adaptive methods is a weakness and discussed this briefly in the conclusion. To accommodate for this, I tried to use an SGD implementation that would at least be as competitive (dropout, max-norm weight clipping with large initial rates, momentum and learning rate schedules). Weight clipping was also shown to improve SGD dropout, at least on MNIST [R6].
- Unfortunately, I don't have too much more insight on the behaviour of lambda though it appears to be quite consistent. The large initial decrease is likely to come from conservative initialization of lambda which works well as a default.
- I did not test on deeper nets largely due to time constraints (it made more sense to me to start on shallower networks then to 'jump the gun' and go straight for very deep nets) . Should I not have done this? As alluded to in the conclusion, I wouldn't be expecting any significant gain on these datasets (perhaps I'm wrong here). It would be cool to try on some speech data where deeper nets have made big improvements but I haven't worked with speech before. Reuters didn't use hidden layers due to the high dimensionality of the inputs (~19000 log word count features). Applying this to RNNs is a work in progress.
----------------------------------------------
To summarize (modifications for the paper update):
- include additional references
- add results for stochastic HF with no dropout
- some additional discussion on the relationship with natural gradient (and classification results)
- better detail section 4, including additional references to [R5]
These modifications will be made by the start of next week (March 11).
One additional comment: after looking over [R6], I realized the MNIST dropout SGD results (~110 errors) were due to a combination of dropout and the max-norm weight clipping and not just dropout alone. I have recently been exploring using weight clipping with stochastic HF and it is advantageous to include it. This is because it allows one to start training with smaller lambda values, likely in the same sense as it allows SGD to start with larger learning rates. I will be updating the code shortly to include this option.
[R1] Yann N. Dauphin, Yoshua Bengio, Big Neural Networks Waste Capacity, arXiv:1301.3583
[R2] O. Vinyals and D. Povey. Krylov subspace descent for deep learning. arXiv:1111.4259, 2011
[R3] Razvan Pascanu, Yoshua Bengio, Natural Gradient Revisited, arXiv:1301.3584
[R4] J. Martens. Deep learning via hessian-free optimization. In ICML 2010.
[R5] J. Martens and I. Sutskever. Training deep and recurrent networks with hessian-free optimization. Neural Networks: Tricks of the Trade, pages 479–535, 2012.
[R6] N. Srivastava. Improving Neural Networks with Dropout. Master's thesis, University of Toronto, 2013. |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | [
"Ryan Kiros"
] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with small gradient and curvature mini-batches independent of the dataset size for classification. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. On classification tasks, stochastic HF achieves accelerated training and competitive results in comparison with dropout SGD without the need to tune learning rates. | [
"optimization",
"neural networks",
"stochastic",
"deep autoencoders",
"recurrent networks",
"conjugate gradient algorithm",
"update directions",
"products",
"order"
] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | 3nHzayPmAI5r1 | comment | 1,363,585,560,000 | av7x0igQwD0M- | [
"everyone"
] | [
"anonymous reviewer 0a71"
] | ICLR.cc/2013/conference | 2013 | reply: Regarding using HF for classification. My point was that lack of results in the literature about classification error with HF might be just due to the fact that this is a new method, arguably hard to implement and hence not many had a chance to play with it. I'm not sure that just using HF (the way James introduced it) would not do well on classification. I feel I didn't made this clear in my original comment. I would just remove that statement. Looking back on [R2] I couldn't find a similar statement, it only says that empirically KSD seems to do better on classification.
Also I see you have not updated the arxiv papers. I would urge you to do so, even if you do not have all the new experiments ready. It would be helpful for us the reviewers to see how you change the paper. |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | [
"Ryan Kiros"
] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with small gradient and curvature mini-batches independent of the dataset size for classification. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. On classification tasks, stochastic HF achieves accelerated training and competitive results in comparison with dropout SGD without the need to tune learning rates. | [
"optimization",
"neural networks",
"stochastic",
"deep autoencoders",
"recurrent networks",
"conjugate gradient algorithm",
"update directions",
"products",
"order"
] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | UJZtu0oLtcJh1 | review | 1,362,391,800,000 | tFbuFKWX3MFC8 | [
"everyone"
] | [
"anonymous reviewer 4709"
] | ICLR.cc/2013/conference | 2013 | title: review of Training Neural Networks with Stochastic Hessian-Free Optimization
review: This paper makes an attempt at extending the Hessian-free learning work to a stochastic setting. In a nutshell, the changes are:
- shorter CG runs
- cleverer information sharing across CG runs that has an annealing effect
- using differently-sized mini-batches for gradient and curvature estimation (former sizes being larger)
- Using a slightly modified damping schedule for lamdba than Martens' LM criteria, which encourages fewer oscillations.
Another contribution of the paper is the integration of dropouts into stochastic HF in a sensible way. The authors also include an exponentially-decaying momentum-style term into the parameter updates.
The authors present but do not discuss results on the Reuters dataset (which seem good). There is also no comparison with the results from [4], which to me would be a natural thing to compare to.
All in all, a series of interesting tricks for making HF work in a stochastic regime, but there are many questions which are unanswered. I would have liked to see more discussion *and* experiments that show which of the individual changes that the author makes are responsible for the good performance. There is also no discussion on the time it takes the stochastic HF method to make on step / go through one epoch / reach a certain error.
SGD dropout is a very competitive method because it's fantastically simple to implement (compared to HF, which is orders of magnitude more complicated), so I'm not yet convinced by the insights of this paper that stochastic HF is worth implementing (though it seems easy to do if one has an already-running HF system). |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | [
"Ryan Kiros"
] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property and study stochastic HF with small gradient and curvature mini-batches independent of the dataset size for classification. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. On classification tasks, stochastic HF achieves accelerated training and competitive results in comparison with dropout SGD without the need to tune learning rates. | [
"optimization",
"neural networks",
"stochastic",
"deep autoencoders",
"recurrent networks",
"conjugate gradient algorithm",
"update directions",
"products",
"order"
] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | CUXbqkRcJWqcy | review | 1,360,514,640,000 | tFbuFKWX3MFC8 | [
"everyone"
] | [
"Ryan Kiros"
] | ICLR.cc/2013/conference | 2013 | review: Code is now available: http://www.ualberta.ca/~rkiros/
Included are scripts to reproduce the results in the paper. |
2rHk2kZ5knTJ6 | A Geometric Descriptor for Cell-Division Detection | [
"Marcelo Cicconet",
"Italo Lima",
"Davi Geiger",
"Kris Gunsalus"
] | We describe a method for cell-division detection based on a geometric-driven descriptor that can be represented as a 5-layers processing network, based mainly on wavelet filtering and a test for mirror symmetry between pairs of pixels. After the centroids of the descriptors are computed for a sequence of frames, the two-steps piecewise constant function that best fits the sequence of centroids determines the frame where the division occurs. | [
"detection",
"geometric descriptor",
"centroids",
"sequence",
"descriptor",
"processing network",
"wavelet filtering",
"test",
"mirror symmetry",
"pairs"
] | https://openreview.net/pdf?id=2rHk2kZ5knTJ6 | https://openreview.net/forum?id=2rHk2kZ5knTJ6 | UvnQU-IxtJfA2 | review | 1,362,163,620,000 | 2rHk2kZ5knTJ6 | [
"everyone"
] | [
"anonymous reviewer ba30"
] | ICLR.cc/2013/conference | 2013 | title: review of A Geometric Descriptor for Cell-Division Detection
review: Goal: automatically spot the point in a video sequence where a cell-division occurs.
Interesting application of deep networks. |
2rHk2kZ5knTJ6 | A Geometric Descriptor for Cell-Division Detection | [
"Marcelo Cicconet",
"Italo Lima",
"Davi Geiger",
"Kris Gunsalus"
] | We describe a method for cell-division detection based on a geometric-driven descriptor that can be represented as a 5-layers processing network, based mainly on wavelet filtering and a test for mirror symmetry between pairs of pixels. After the centroids of the descriptors are computed for a sequence of frames, the two-steps piecewise constant function that best fits the sequence of centroids determines the frame where the division occurs. | [
"detection",
"geometric descriptor",
"centroids",
"sequence",
"descriptor",
"processing network",
"wavelet filtering",
"test",
"mirror symmetry",
"pairs"
] | https://openreview.net/pdf?id=2rHk2kZ5knTJ6 | https://openreview.net/forum?id=2rHk2kZ5knTJ6 | ddQbtyHpiUz9Z | review | 1,362,034,500,000 | 2rHk2kZ5knTJ6 | [
"everyone"
] | [
"David Warde-Farley"
] | ICLR.cc/2013/conference | 2013 | review: The proposed method appears to be an engineered descriptor that doesn't involve any learning. While the application is interesting, ICLR is probably not an appropriate venue. |
2rHk2kZ5knTJ6 | A Geometric Descriptor for Cell-Division Detection | [
"Marcelo Cicconet",
"Italo Lima",
"Davi Geiger",
"Kris Gunsalus"
] | We describe a method for cell-division detection based on a geometric-driven descriptor that can be represented as a 5-layers processing network, based mainly on wavelet filtering and a test for mirror symmetry between pairs of pixels. After the centroids of the descriptors are computed for a sequence of frames, the two-steps piecewise constant function that best fits the sequence of centroids determines the frame where the division occurs. | [
"detection",
"geometric descriptor",
"centroids",
"sequence",
"descriptor",
"processing network",
"wavelet filtering",
"test",
"mirror symmetry",
"pairs"
] | https://openreview.net/pdf?id=2rHk2kZ5knTJ6 | https://openreview.net/forum?id=2rHk2kZ5knTJ6 | uVT9-IDrqY-ci | review | 1,362,198,120,000 | 2rHk2kZ5knTJ6 | [
"everyone"
] | [
"anonymous reviewer 3bab"
] | ICLR.cc/2013/conference | 2013 | title: review of A Geometric Descriptor for Cell-Division Detection
review: This paper aims to annotate the point at which cells divide in a video sequence.
Pros:
- a useful and interesting application.
Cons:
- it does not seem to involve any learning, it clearly does not fit at ICLR.
- no comparison to other systems nor description of the dataset, nor cross-validation.
- the results are not that impressive considering they are not that far from the results of a simple image difference. I think a learnt model would perform better at this task. |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | [
"Sebastian Hitziger",
"Maureen Clerc",
"Alexandre Gramfort",
"Sandrine Saillet",
"Christian Bénar",
"Théodore Papadopoulo"
] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of information when features are well-aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could make use of the alignement and reveal higher-level features. In this case, however, small missalignements or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework to overcome these limitations by allowing atoms to adapt their position across signals. The method is validated on simulated and real neuroelectric data. | [
"dictionary",
"features",
"application",
"atoms",
"signals",
"case",
"neuroelectric signals",
"powerful tool"
] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | KktHprTPH5p6q | review | 1,363,851,960,000 | 4eEO5rd6xSevQ | [
"everyone"
] | [
"anonymous reviewer 8b9c"
] | ICLR.cc/2013/conference | 2013 | review: One additional comment is that the work bears some similarities to Hinton's recent work on 'capsules' and it may be worth citing that paper:
Hinton, G. E., Krizhevsky, A. and Wang, S. (2011)
Transforming Auto-encoders.
ICANN-11: International Conference on Artificial Neural Networks, Helsinki.
http://www.cs.toronto.edu/~hinton/absps/transauto6.pdf |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | [
"Sebastian Hitziger",
"Maureen Clerc",
"Alexandre Gramfort",
"Sandrine Saillet",
"Christian Bénar",
"Théodore Papadopoulo"
] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of information when features are well-aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could make use of the alignement and reveal higher-level features. In this case, however, small missalignements or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework to overcome these limitations by allowing atoms to adapt their position across signals. The method is validated on simulated and real neuroelectric data. | [
"dictionary",
"features",
"application",
"atoms",
"signals",
"case",
"neuroelectric signals",
"powerful tool"
] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | Gp6ETkwghDG9l | comment | 1,363,646,700,000 | DdhjdI7FMGDFT | [
"everyone"
] | [
"anonymous reviewer 8ed7"
] | ICLR.cc/2013/conference | 2013 | reply: The authors have improved the paper, addressing many of the issues I brought up. I would modify my review to be Neutral; if that is not an acceptable evaluation, then I modify my review to a Weak Accept. I am only posting this response to the poster asking for an updated evaluation, because I am not sure if I am supposed to make this modification public.
I still have a couple of comments:
1. The authors include a description of the convolution sparse coding techniques, such as SISC, which better compares their contribution to related work. SISC is not a real competitor to JADL, because it is too computationally intensive; however, in the synthetic experiments, it would be useful to include it in the comparison. If SISC outperformed JADL, it would not invalidate the usefulness of JADL (which is the only one that can be applied to large datasets), but would give a much better understanding of the properties of JADL versus these previous convolution approaches.
2. The paper is over length, but I assume that will be fixed. |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | [
"Sebastian Hitziger",
"Maureen Clerc",
"Alexandre Gramfort",
"Sandrine Saillet",
"Christian Bénar",
"Théodore Papadopoulo"
] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of information when features are well-aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could make use of the alignement and reveal higher-level features. In this case, however, small missalignements or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework to overcome these limitations by allowing atoms to adapt their position across signals. The method is validated on simulated and real neuroelectric data. | [
"dictionary",
"features",
"application",
"atoms",
"signals",
"case",
"neuroelectric signals",
"powerful tool"
] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | 3yWm3DNg8o3fu | review | 1,363,126,320,000 | 4eEO5rd6xSevQ | [
"everyone"
] | [
"Sebastian Hitziger, Maureen Clerc, Alexandre Gramfort, Sandrine Saillet, Christian Bénar, Théodore Papadopoulo"
] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their constructive comments. We submitted a new version of the paper to arXiv, which should be made available on Wednesday, March 13. As one major change we now point out the similarity to convolutional/shift-invariant sparse coding (SISC)*, but also mention the differences mainly introduced by the l_0 sparsity constraint. A new contribution is an analysis of the algorithm's complexity as well as possibilities for speed ups – although the computation time was already low for the conducted experiments, this could become an issue for real-time analysis. The changes in detail:
[1] Smith, Evan, and Michael S. Lewicki. 'Efficient coding of time-relative structure using spikes.' Neural Computation 17.1 (2005): 19-45.
[2] Blumensath, Thomas, and Davies, Mike. 'Sparse and shift-invariant representations of music.' Audio, Speech, and Language Processing, IEEE Transactions on 14.1 (2006): 50-57.
[3] R. Grosse, R. Raina, H. Kwong, and AY Ng, 'Shift-invariant sparse coding for audio classification,' in Proceedings of the Twenty-third Conference on Uncertainty in Artificial Intelligence (UAI'07), 2007
[4] Ekanadham, Chaitanya, Daniel Tranchina, and Eero P. Simoncelli. 'Sparse decomposition of transformation-invariant signals with continuous basis pursuit.' Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, 2011.
Introduction
The second part of the introduction has been rewritten. Shift-invariant sparse coding (SISC) is introduced and its differences to JADL are pointed out. Most significant is the constraint in JADL, that only one shifted version of each atom shall be active per signal. As a consequence, JADL leads to a less complex algorithm (in both, sparse coding and dictionary update step), which in contrast to SISC does not need heuristic preselection of active atoms. In addition, we remarked that JADL is designed to learn only few atoms, in contrast to most dictionary learning applications. Hence, the term “sparsity” only makes sense with respect to the “unrolled” dictionary. However, for the most part this sparsity is achieved by the explicit constraint, not by sparse regularization.
Section 3, JADL
In the sparse coding section, the computational advantages of the modified LARS have been pointed out and contrasted with SISC.
In the dictionary update section, we noted that the JADL formulation leads to an update of same complexity as regular DL; this does not apply to SISC. This fact could be used by changing the ratio of sparse coding and dictionary update steps in favor to the latter, i.e. by employing mini-batch or online techniques.
Section 4, Experiments
For the real data, the computation time has been investigated for different K (number of atoms) and S (number of allowed shifts). The curve shows linear correlation with S; when increasing K, however, the computation time increases more than linear. This is due to the following: While both S and K affect the size of the unrolled dictionary, an increase in S is handled efficiently in the JADL formulation, as mentioned in the sections sparse coding and dictionary update above. We also mentioned that the computation time for the conducted experiments was very small (4.3 seconds for the real data), hence computational complexity should not become an issue for offline analysis. Employing the proposed speed ups and a further optimization of the code, on the other hand, could even allow for real time analysis (this could be desirable especially for M/EEG-based brain computer interfaces (BCI)) .
Detailed responses to the reviewers' comments
-------------------------------------------------------------
Anonymous 8ed7
-------------------------
Cons
1. [Computational requirements] As mentioned above, we investigated computation time empirically for different values of S and K. Even for K=15 and S=200 (i.e. 3000 elements in the unrolled dictionary ) the computation time remained less than 1 minute for 200 iterations. Therefore, the increased complexity should only matter for real-time applications, for which we proposed several speed-ups in the formulation of the algorithm.
2. (a) [Examples for shifts] We added a comment (footnote) on possible definitions of the shifts in the problem statement, there are different ways how to define the shifts at the borders. As the JADL framework is general enough to allow even arbitrary linear transforms, we do not specify a certain definition at this point. A detailed discussion of the right way to handle boundary effects would be out of the scope of the paper; however, we found that in our experiments the choice of the shifts did not affect the outcome significantly.
(b) [Examples for types of data that can be analyzed with JADL] In the introduction, we now mention similar properties to those of neuroelectric data in different bioelectric or biomagnetic signals, such as ECG, EMG. [explain “features well-aligned across signals”] We changed this formulation to “each waveform of interest occurs approximately at the same time in every trial”. Is this sufficiently clear?
(c) [Improve explanation how to enforce constraint (7)] changed as suggested
3. [On the importance of lambda] We agree that if the number of atoms is large, the parameter lambda has an important role to ensure sparsity. However, JADL is designed to learn only a small number K of atoms. This is due to several reasons: (i) the jitter-adaptivity ensures a compact representation, as a waveform that is shifted throughout signals can be encoded in a single atom; (ii) for the applications JADL is aimed at, it is either not desired or not feasible to learn many atoms, since (a) the dictionary should be easily interpretable and reveal the main activity across signals (b) the number of training examples M is limited and K<<M must hold to prevent from overfitting, which is a particularily critical aspect due to often high noise level. A similar comment on the different use of “sparsity” in JADL has been added to the introduction to get this point clear.
4. [Comparison to common DL with large K and large lambda] We agree that a comparison to DL with a similar K used as in JADL would not be fair. For the simulated data, different values of K have therefore been used, and lambda has been optimized individually for each K to yield the smallest error/highest similarity w.r.t. the ground truth. The table stops at K=12; for larger K performance becomes worse for all three methods. This is now pointed out in the new version.
Minor Comments
2. We changed the claim from “the biggest challenge” to “an important challenge” and provided references.
3. changed as suggested
4. changed as suggested
Anonymous 5e7a
-------------------------
We agree that a comparison to previous work on convolutional/shift-invariant sparse coding is necessary and hope that the changes made as described above make the similarities and differences to between SISC and JADL sufficiently clear. We found that most papers on SISC do not address the problem of the dictionary update but only focus on sparse coding. An exception was [3], from which it becomes clear that for SISC the dictionary update is a non-trivial problem with increased computational complexity.
Anonymous 8b9c
-------------------------
See comment above for comparison to SISC.
In fact, the LFP data does not contain much more structure than the spikes. Hence, the learned dictionaries look quite redundant and its analysis provides only limited insight in the data. However, we think that the visualization of the code reveals that although the dictionary looks redundant, important differences have been picked up, leading to contiguous sets of epochs dominated by the same atom. |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | [
"Sebastian Hitziger",
"Maureen Clerc",
"Alexandre Gramfort",
"Sandrine Saillet",
"Christian Bénar",
"Théodore Papadopoulo"
] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of information when features are well-aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could make use of the alignement and reveal higher-level features. In this case, however, small missalignements or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework to overcome these limitations by allowing atoms to adapt their position across signals. The method is validated on simulated and real neuroelectric data. | [
"dictionary",
"features",
"application",
"atoms",
"signals",
"case",
"neuroelectric signals",
"powerful tool"
] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | zo2FGvCYFkoR4 | review | 1,362,402,300,000 | 4eEO5rd6xSevQ | [
"everyone"
] | [
"anonymous reviewer 8b9c"
] | ICLR.cc/2013/conference | 2013 | title: review of Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals
review: The paper proposes a method for learning shiftable dictionary elements - i.e., each dictionary is allowed to shift to its optimal position to model structure in a signal. Results on test data show a significant improvement over regular sparse coding dictionary learning for recovering structure in data, and results on LFP data provide a more interpretable result.
This seems like a sensible approach and the results are pretty convincing. The choice of data seems a bit odd - all the LFP waveforms look the same, perhaps it would be worthwhile to expand the waveform so we can see more structure than just a spike.
This approach could be easily confused for a convolution model. The difference here is that the coefficients are mutually exclusive over shift. the authors may want to point out the similarities and differences to a convolution sparse coding model for the reader. |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | [
"Sebastian Hitziger",
"Maureen Clerc",
"Alexandre Gramfort",
"Sandrine Saillet",
"Christian Bénar",
"Théodore Papadopoulo"
] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of information when features are well-aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could make use of the alignement and reveal higher-level features. In this case, however, small missalignements or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework to overcome these limitations by allowing atoms to adapt their position across signals. The method is validated on simulated and real neuroelectric data. | [
"dictionary",
"features",
"application",
"atoms",
"signals",
"case",
"neuroelectric signals",
"powerful tool"
] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | HrUgwafkmVrpB | review | 1,363,533,480,000 | 4eEO5rd6xSevQ | [
"everyone"
] | [
"Aaron Courville"
] | ICLR.cc/2013/conference | 2013 | review: Please read the author's responses to your review and the updated version of the paper. Do they change your evaluation of the paper? |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | [
"Sebastian Hitziger",
"Maureen Clerc",
"Alexandre Gramfort",
"Sandrine Saillet",
"Christian Bénar",
"Théodore Papadopoulo"
] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of information when features are well-aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could make use of the alignement and reveal higher-level features. In this case, however, small missalignements or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework to overcome these limitations by allowing atoms to adapt their position across signals. The method is validated on simulated and real neuroelectric data. | [
"dictionary",
"features",
"application",
"atoms",
"signals",
"case",
"neuroelectric signals",
"powerful tool"
] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | 9CrL9uhDy_qlF | review | 1,363,533,540,000 | 4eEO5rd6xSevQ | [
"everyone"
] | [
"Aaron Courville"
] | ICLR.cc/2013/conference | 2013 | review: Please read the author's responses to your review and the updated version of the paper. Do they change your evaluation of the paper? |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | [
"Sebastian Hitziger",
"Maureen Clerc",
"Alexandre Gramfort",
"Sandrine Saillet",
"Christian Bénar",
"Théodore Papadopoulo"
] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of information when features are well-aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could make use of the alignement and reveal higher-level features. In this case, however, small missalignements or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework to overcome these limitations by allowing atoms to adapt their position across signals. The method is validated on simulated and real neuroelectric data. | [
"dictionary",
"features",
"application",
"atoms",
"signals",
"case",
"neuroelectric signals",
"powerful tool"
] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | DJA5lKoL8-lLY | review | 1,362,362,340,000 | 4eEO5rd6xSevQ | [
"everyone"
] | [
"anonymous reviewer 8ed7"
] | ICLR.cc/2013/conference | 2013 | title: review of Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals
review: This paper introduces a dictionary learning technique that incorporates time delays or shifts on the learned dictionary, called JADL, to better account for this structure in multi-trial neuroelectric signals. The algorithm uses the previous dictionary learning framework and non-convex optimization, but adds a selection step over possible shifts for each atom (for each point), framed as an l-0 optimization. This objective is the main contribution of the paper, which enables better performance for time-delayed data as well as potentially useful temporal structure to be extracted from the data.
The paper introduces a novel objective for addressing the time shift problem (e.g in M/EEG data), but frames a typical coordinate descent approach for solving the resulting non-convex problem. The main difference in the optimization is (1) ensuring that the coefficients, a, for the dictionary, D, block-wise satisfy the l-0 constraint by disabling updates to all but one coefficient within a block and (2) modifying the gradient update in block coordinate descent on the dictionary, D, which now has a shift operator around D. Taking this obvious solution route leads to a non-convex optimization and potentially lengthy computation (as the delta set size increases). The quality of writing and experiments is high.
Pros
1. The proposed JADL algorithm facilitates application of dictionary learning techniques to M/EEG data, which is an important application. Moreover, as a secondary benefit, it allows time-delay structure to be learned.
2. The writing is mostly clear and the paper is well organized.
3. Experimental results are comprehensive and include important details of their experimental procedure.
Cons
1. The computational requirements of this algorithm are not explored, though the larger dictionary in JADL (due to the addition of delta shifts) could significantly slow learning.
2. For clarity: (a) Include examples of shifts, Delta, in the problem statement (such as the ones used in the experiments). (b) Include examples of the types of data that could benefit from this framework, to better justify the importance of framing the problem with time-shifts and better explain what is meant by 'features [being] well-aligned across signals'. (c ) The explanation of how to enforce constraint (7) should be improved, e.g. 'block all other coefficients a_j^{S,i}' should probably be 'block all other coefficients in segment a_j^{S,i}', but the meaning is actually significantly different and this was quite confusing.
3. The comment that the parameter, lambda, is no longer important because sparsity is induced by the constraint in (7) suggests that as the size of the set of delta increases, this problem formulation no longer learns a sparse solution over the chosen dictionary. I would suggest that this is not the case, but rather that the datasets used in this paper had a small dictionary and did not require the coefficients to be sparse. The constraint in (7) simply ensures that only one delta is chosen per atom, but does not guarantee that the final solution over the delta-shifted dictionary will be sparse. Therefore, if the number of atoms is large, the regularizer || a_j ||_1 should still be important. It is true that constraint (7) ensures the solution is sparse over all possible delta-shifted dictionaries; this is however an unfair comparison to other dictionary learning techniques which have a much smaller dictionary space to weight over.
4. Con (3) suggests that learning with a very large dictionary (the size of the delta-shifted dictionary set) and setting the lambda parameter large might have more comparable performance to the algorithm suggested in this paper and should be included. Of course, this highly regularized approach on a large dictionary would not explicitly provide the time shift structure in the data as does JADL, but would be an interesting and more fair comparison. However, if the time-shift structure is not actually useful (and is simply used to improve learning), then DL with a large dictionary and large regularization parameter, lambda, could be all that is needed to deal with this problem for EEG data. The authors should clarify this difference and contribution more clearly.
Minor Comments:
1. For a reference on convex solution to the dictionary learning problem, see 'Convex sparse matrix factorizations', F. Bach, J. Mairal and J. Ponce. 2008; and 'Convex Sparse Coding, Subspace Learning and Semi-Supervised Extensions', X. Zhang, Y. Yu, M. White, R. Huang and D. Schuurmans. 2011.
2. There should be citations for the claim: 'This issue is currently the biggest challenge
in M/EEG multi-trial analysis.'
3. Bottom of page 3: 'which allows to solve it' -> 'which allows us to solve it'
4. Page 5: 'property allows to' -> 'property allows us to' |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | [
"Sebastian Hitziger",
"Maureen Clerc",
"Alexandre Gramfort",
"Sandrine Saillet",
"Christian Bénar",
"Théodore Papadopoulo"
] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of information when features are well-aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could make use of the alignement and reveal higher-level features. In this case, however, small missalignements or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework to overcome these limitations by allowing atoms to adapt their position across signals. The method is validated on simulated and real neuroelectric data. | [
"dictionary",
"features",
"application",
"atoms",
"signals",
"case",
"neuroelectric signals",
"powerful tool"
] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | NjApJLTlfWxlo | review | 1,362,376,680,000 | 4eEO5rd6xSevQ | [
"everyone"
] | [
"anonymous reviewer 5e7a"
] | ICLR.cc/2013/conference | 2013 | title: review of Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals
review: This paper introduces a sparse coding variant called 'jitter-adaptive' sparse coding, aimed at improving the efficiency of sparse coding by augmenting a dictionary with temporally shifted elements. The motivating use case is EEG data, where neural activity can arise at any time, in atoms that span multiple recording channels. Ideally these motifs would be recognized as the dictionary components by a dictionary-learning algorithm.
EEG data has been analyzed with sparse coding before, as noted by the authors, and the focus of this paper is the use of jitter-adaptive dictionary learning to achieve a more useful signal decomposition. The use of jitter adaptive dictionary learning is indeed an intuitive and effective strategy for recovering the atoms of synthetic and actual data.
One weakness of this paper is that the technique of augmenting a dictionary by a time-shifting operator is not entirely novel, and the authors should compare and contrast their approach with e.g.:
- Continuous Basis Pursuit
- Deconvolutional Networks- Charles Cadieu's PhD work
- The Statistical Inefficiency of Sparse Coding for Images (http://arxiv.org/abs/1109.6638)
Pro(s)
- jitter-adaptive learning is an effective strategy for applying sparse coding
to temporal data, particularly EEG
Con(s)
- paper would benefit from clarification of contribution relative to previous work |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | [
"Sebastian Hitziger",
"Maureen Clerc",
"Alexandre Gramfort",
"Sandrine Saillet",
"Christian Bénar",
"Théodore Papadopoulo"
] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of information when features are well-aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could make use of the alignement and reveal higher-level features. In this case, however, small missalignements or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework to overcome these limitations by allowing atoms to adapt their position across signals. The method is validated on simulated and real neuroelectric data. | [
"dictionary",
"features",
"application",
"atoms",
"signals",
"case",
"neuroelectric signals",
"powerful tool"
] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | DdhjdI7FMGDFT | review | 1,363,533,480,000 | 4eEO5rd6xSevQ | [
"everyone"
] | [
"Aaron Courville"
] | ICLR.cc/2013/conference | 2013 | review: Please read the author's responses to your review and the updated version of the paper. Do they change your evaluation of the paper? |
MQm0HKx20L7iN | Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative Clustering | [
"Boyi Xie",
"Shuheng Zheng"
] | Large scale agglomerative clustering is hindered by computational burdens. We propose a novel scheme where exact inter-instance distance calculation is replaced by the Hamming distance between Kernelized Locality-Sensitive Hashing (KLSH) hashed values. This results in a method that drastically decreases computation time. Additionally, we take advantage of certain labeled data points via distance metric learning to achieve a competitive precision and recall comparing to K-Means but in much less computation time. | [
"hashing",
"agglomerative",
"computational burdens",
"novel scheme",
"exact",
"distance calculation",
"distance",
"klsh",
"values"
] | https://openreview.net/pdf?id=MQm0HKx20L7iN | https://openreview.net/forum?id=MQm0HKx20L7iN | vpc3vyRo-2AFM | review | 1,362,080,280,000 | MQm0HKx20L7iN | [
"everyone"
] | [
"anonymous reviewer c8d7"
] | ICLR.cc/2013/conference | 2013 | title: review of Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative Clustering
review: This paper proposes to use kernelized locality-sensitive hashing (KLSH), based on a similarity metric learned from labeled data, to accelerate agglomerative (hierarchical) clustering. Agglomerative clustering requires, at each iteration, to find the pair of closest clusters. The idea behind this paper is that KLSH can be used to accelerate the search for these pairs of clusters. Also, the use of a learned, supervised metric should encourage the extraction of a clustering that reflects the class structure of the data distribution, even if computed from a relatively small subset of labeled data. Comparisons with k-means, k-means with distance learning, agglomerative clustering with KLSH and agglomerative clustering with KLSH and distance learning are reported.
Unfortunately, I find this paper to be quite incremental. It essentially corresponds to a straightforward combination of 3 ideas 1) agglomerative clustering, 2) kernelized LSH [7] and 3) supervised metric learning [5].
I find that details about the approach are also missing, about how to combine KLSH with agglomerative clustering. First, the authors do not explain how KLSH is leveraged to find the pair of closest clusters C_i and C_j. Are they iterating over each cluster C_i, finding its closest neighbour C_j using KLSH? This would correspond to a complexity linear in the number of clusters, and thus initially linear in the number of data points N. In a large scale setting, isn't this still too expensive? Are the authors doing something more clever? Algorithm 1 also mentions a proximity matrix P. Isn't its size N^2 initially? Again, in a large scale setting, it would be impossible to store such a matrix. The authors also do not specify how to compute distances between two clusters consisting in more than a single data point. I believe this is sometimes referred to as the linkage distance or criteria between clusters, which can be the min distance over all pairs or max distance over all pairs. What did the authors use, and how does KLSH allow for an efficient computation?
Moreover, I'm not convinced of the benefit of the algorithm, based on the experiments reported in table 1. Indeed, agglomerative clustering with KLSH and distance learning does not dominate all other algorithm for both the precision and recall. In fact, it's doing terribly in terms of recall, compared to k-means. Also, it is not exactly clear to me what precision and recall correspond to in the context of this clustering experiment. I would suggest the author explicitly define what they mean here. I'm more familiar with adjusted rand index, as an evaluation metric for clustering...
Finally, the writing of the paper is quite poor. I already mentioned that many details are lacking. Moreover, the paper is filled with typos and oddly phrased sentences. |
MQm0HKx20L7iN | Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative Clustering | [
"Boyi Xie",
"Shuheng Zheng"
] | Large scale agglomerative clustering is hindered by computational burdens. We propose a novel scheme where exact inter-instance distance calculation is replaced by the Hamming distance between Kernelized Locality-Sensitive Hashing (KLSH) hashed values. This results in a method that drastically decreases computation time. Additionally, we take advantage of certain labeled data points via distance metric learning to achieve a competitive precision and recall comparing to K-Means but in much less computation time. | [
"hashing",
"agglomerative",
"computational burdens",
"novel scheme",
"exact",
"distance calculation",
"distance",
"klsh",
"values"
] | https://openreview.net/pdf?id=MQm0HKx20L7iN | https://openreview.net/forum?id=MQm0HKx20L7iN | Z9bz9yXn_F9nA | review | 1,362,172,860,000 | MQm0HKx20L7iN | [
"everyone"
] | [
"anonymous reviewer cce9"
] | ICLR.cc/2013/conference | 2013 | title: review of Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative Clustering
review: This workshop submission proposes a method for clustering data which applies a semi-supervised distance metric to the data prior to applying kernelized locality-sensitive hashing for agglom
erative clustering. The intuition is that distance learning on a subset of data pairs will improve overall performance, and that the LSH-based clustering will be a better match for high-dimension data than k
-means. The method is evaluated on MNIST data.
There is little to no innovation in this paper, and, considering that there is no learned representation to speak of, it is of little interest for ICLR. The authors do not adequately explain the approach, an
d the experimental evaluation is unclear. The semi-supervised distance metric learning is not discussed fully, and the number and distribution of labeled data is not given.
Moreover, the results are not promising. Although it is difficult to compare raw precision/recall numbers (F-measure or other metrics would be preferable), it is clear that the proposed method has much lower
recall than the kmeans baseline, with only moderate improvement in precision. The submission would also be improved by a visualization of the clustering obtained with the different methods. |
iKeAKFLmxoim3 | Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity
Estimation from Facial Images | [
"Ognjen Rudovic",
"Maja Pantic",
"Vladimir Pavlovic"
] | We propose a novel method for automatic pain intensity estimation from facial images based on the framework of kernel Conditional Ordinal Random Fields (KCORF). We extend this framework to account for heteroscedasticity on the output labels(i.e., pain intensity scores) and introduce a novel dynamic features, dynamic ranks, that impose temporal ordinal constraints on the static ranks (i.e., intensity scores). Our experimental results show that the proposed approach outperforms state-of-the art methods for sequence classification with ordinal data and other ordinal regression models. The approach performs significantly better than other models in terms of Intra-Class Correlation measure, which is the most accepted evaluation measure in the tasks of facial behaviour intensity estimation. | [
"pain intensity estimation",
"facial images",
"framework",
"intensity scores",
"novel",
"kcorf"
] | https://openreview.net/pdf?id=iKeAKFLmxoim3 | https://openreview.net/forum?id=iKeAKFLmxoim3 | VTEO8hp3ad83Q | review | 1,362,297,780,000 | iKeAKFLmxoim3 | [
"everyone"
] | [
"anonymous reviewer 9402"
] | ICLR.cc/2013/conference | 2013 | title: review of Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity
Estimation from Facial Images
review: This extended abstract discusses a modification to an existing ordinal conditional random field model (CORF) so as to treat non-stationary data. This is done by making the variance in a probit model depend on the observations (x) and appealing to results on kernels methods for CRFs by Lafferty et al. The authors also introduce what they call dynamic ranks, but it is impossible to understand, from this write-up, how these relate to the model. No intuition is provide either. What is the equal sign doing in the definition of dynamic ranks?
Regarding section 2, it is too short and impossible to follow. The authors should rewrite it making sure the mathematical models are specified properly and in full mathematical detail. All the variables and symbols should be defined. I know there are space constraints, but I also believe a better presentation is possible.
The experiments claim great improvements over techniques that do not exploit structure or techniques that exploit structure but which are not suitable for ordinal regression. As it is, it would be impossible to reproduce the results in this abstract. However, it seems that great effort was put into the empirical part of the work.
Some typos:
Abstract: Add space after labels. Also, a novel should be simply novel.
Introduction: in the recent should be in recent. Also, drop the a in a novel dynamic features.
Section 2: Add space after McCullagh. What is standard CRF form? Please be precise as there are many ways of parameterizing and structuring CRFs.
References: laplacian should be Laplacian |
iKeAKFLmxoim3 | Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity
Estimation from Facial Images | [
"Ognjen Rudovic",
"Maja Pantic",
"Vladimir Pavlovic"
] | We propose a novel method for automatic pain intensity estimation from facial images based on the framework of kernel Conditional Ordinal Random Fields (KCORF). We extend this framework to account for heteroscedasticity on the output labels(i.e., pain intensity scores) and introduce a novel dynamic features, dynamic ranks, that impose temporal ordinal constraints on the static ranks (i.e., intensity scores). Our experimental results show that the proposed approach outperforms state-of-the art methods for sequence classification with ordinal data and other ordinal regression models. The approach performs significantly better than other models in terms of Intra-Class Correlation measure, which is the most accepted evaluation measure in the tasks of facial behaviour intensity estimation. | [
"pain intensity estimation",
"facial images",
"framework",
"intensity scores",
"novel",
"kcorf"
] | https://openreview.net/pdf?id=iKeAKFLmxoim3 | https://openreview.net/forum?id=iKeAKFLmxoim3 | lBM7_cfUaYlP1 | review | 1,362,186,300,000 | iKeAKFLmxoim3 | [
"everyone"
] | [
"anonymous reviewer 0342"
] | ICLR.cc/2013/conference | 2013 | title: review of Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity
Estimation from Facial Images
review: This paper seeks to estimate ordinal labels of pain intensity from
videos of faces. The paper discusses a new variation of a conditional
random field in which the produced labels are ordinal values. The
paper's main claim to novelty is the idea of 'dynamic ranks', but it
is unclear what these are.
This paper does not convey its ideas clearly. It is not immediately
obvious why an ordinal regression problem demands a CRF, much less a
kernelized heteroscedastic CRF. Since I assume that each frame has a
single label, is the function of the CRF simply to impose temporal
smoothness constraints? I don't understand the motivation for the
additional aspects of this. The idea of 'dynamic ranks' is not
explained, beyond Equation (2), which is itself confusing. For
example, what does the equal sign inside the parentheses mean on the
left side of Equation 2? It took me quite a while of looking at the
right-hand side of this equation to realize that it was defining a
set, but I don't understand how this relates to ranking or dynamics.
Section 3 seems to imply that the kernel is between the features of
6x6 patches, but this doesn't make sense to me if the objective is to
have temporal smoothing.
I found this paper very confusing. It does not provide many details
or intuition. In trying to resolve this confusion, I examined the
authors' previous work, cited as [14] and [15]. These other papers
appear to contain most of the crucial details and assumptions that sit
behind the present paper. I appreciate that this is a very short
paper, but for it to be a useful contribution it must be at least
somewhat self contained. As it stands, I do not feel this is
achieved. |
zzKhQhsTYlzAZ | Regularized Discriminant Embedding for Visual Descriptor Learning | [
"Kye-Hyeon Kim",
"Rui Cai",
"Lei Zhang",
"Seungjin Choi"
] | Images can vary according to changes in viewpoint, resolution, noise, and illumination. In this paper, we aim to learn representations for an image, which are robust to wide changes in such environmental conditions, using training pairs of matching and non-matching local image patches that are collected under various environmental conditions. We present a regularized discriminant analysis that emphasizes two challenging categories among the given training pairs: (1) matching, but far apart pairs and (2) non-matching, but close pairs in the original feature space (e.g., SIFT feature space). Compared to existing work on metric learning and discriminant analysis, our method can better distinguish relevant images from irrelevant, but look-alike images. | [
"pairs",
"discriminant",
"visual descriptor",
"images",
"changes",
"viewpoint",
"resolution",
"noise",
"illumination",
"representations"
] | https://openreview.net/pdf?id=zzKhQhsTYlzAZ | https://openreview.net/forum?id=zzKhQhsTYlzAZ | FBx7CpGZiEA32 | review | 1,362,287,940,000 | zzKhQhsTYlzAZ | [
"everyone"
] | [
"anonymous reviewer 1e7c"
] | ICLR.cc/2013/conference | 2013 | title: review of Regularized Discriminant Embedding for Visual Descriptor Learning
review: The paper aims to present a method for discriminant analysis for image
descriptors. The formulation splits a given dataset of labeled images
into 4 categories, Relevant/Irrelevant and Near/Far pairs
(RN,RF,IN,IF). The final form of the objective aims to maximize the
ratio of sum of distances of irrelevant pairs divided by relevant pairs. The distance metric is calculated at the lower dimensional projected space. The
main contribution of this work as suggested in the paper is selecting
the weighting of 4 splits differently from previous work.
The main intuition or reasoning behind this choice is not given,
neither any conclusive emprical evidence. In the only experiment that
contains real images in the paper, data is said to be taken from
Flickr. However, it is not clear if this is a publicly available
dataset or some random images that authors collected. Moreover, for
this experiment, one of the only two relevant methods are not included
for comparison. Neither, any details of the training procedure nor the actual hyper parameters (eta) are explained in the paper. |
zzKhQhsTYlzAZ | Regularized Discriminant Embedding for Visual Descriptor Learning | [
"Kye-Hyeon Kim",
"Rui Cai",
"Lei Zhang",
"Seungjin Choi"
] | Images can vary according to changes in viewpoint, resolution, noise, and illumination. In this paper, we aim to learn representations for an image, which are robust to wide changes in such environmental conditions, using training pairs of matching and non-matching local image patches that are collected under various environmental conditions. We present a regularized discriminant analysis that emphasizes two challenging categories among the given training pairs: (1) matching, but far apart pairs and (2) non-matching, but close pairs in the original feature space (e.g., SIFT feature space). Compared to existing work on metric learning and discriminant analysis, our method can better distinguish relevant images from irrelevant, but look-alike images. | [
"pairs",
"discriminant",
"visual descriptor",
"images",
"changes",
"viewpoint",
"resolution",
"noise",
"illumination",
"representations"
] | https://openreview.net/pdf?id=zzKhQhsTYlzAZ | https://openreview.net/forum?id=zzKhQhsTYlzAZ | -7pc74mqcO-Mr | review | 1,362,186,780,000 | zzKhQhsTYlzAZ | [
"everyone"
] | [
"anonymous reviewer 39f1"
] | ICLR.cc/2013/conference | 2013 | title: review of Regularized Discriminant Embedding for Visual Descriptor Learning
review: This paper describes a method for learning visual feature descriptors that are invariant to changes in illumination, viewpoint, and image quality. The method can be used for multi-view matching and alignment, or for robust image retrieval. The method computes a regularized linear projection of SIFT feature descriptors to optimize a weighted similarity measure. The method is applied to matching and non-matching patches from Flickr images. The primary contribution of this workshop submission is to demonstrate that a coarse weighting of the data samples according to the disparity between their semantic distance and their Euclidean distance in SIFT descriptor space.
The novelty of the paper is minimal, and most details of the method and the validation are not given. The authors focus on the weighting of the sample pairs to emphasize both the furthest similar pairs and the closest dissimilar pairs, but it is not clear that this is provides a substantial gain. |
zzKhQhsTYlzAZ | Regularized Discriminant Embedding for Visual Descriptor Learning | [
"Kye-Hyeon Kim",
"Rui Cai",
"Lei Zhang",
"Seungjin Choi"
] | Images can vary according to changes in viewpoint, resolution, noise, and illumination. In this paper, we aim to learn representations for an image, which are robust to wide changes in such environmental conditions, using training pairs of matching and non-matching local image patches that are collected under various environmental conditions. We present a regularized discriminant analysis that emphasizes two challenging categories among the given training pairs: (1) matching, but far apart pairs and (2) non-matching, but close pairs in the original feature space (e.g., SIFT feature space). Compared to existing work on metric learning and discriminant analysis, our method can better distinguish relevant images from irrelevant, but look-alike images. | [
"pairs",
"discriminant",
"visual descriptor",
"images",
"changes",
"viewpoint",
"resolution",
"noise",
"illumination",
"representations"
] | https://openreview.net/pdf?id=zzKhQhsTYlzAZ | https://openreview.net/forum?id=zzKhQhsTYlzAZ | Xf5Pf5SWhtEYT | review | 1,363,779,180,000 | zzKhQhsTYlzAZ | [
"everyone"
] | [
"Kye-Hyeon Kim, Rui Cai, Lei Zhang, Seungjin Choi"
] | ICLR.cc/2013/conference | 2013 | review: We sincerely appreciate all the reviewers for their time and comments to this manuscript.
We fully agree that it is really hard to find maningful contributions from this short paper, while we tried our best to emphasize them. As we have noted, the full version of this manuscript is currently under review in an international journal. In order to avoid violating the dual-submission policy of the journal, we could not include most of the details and empirical results - only the main idea and some simple examples could be remained in this workshop track submission.
We promise that all the details omitted in this version will be presented clearly in the workshop, e.g., the choice of the weighting of each split, the training dataset used in our experiments, and conclusive empirical comparisons.
For example, we compared the image retrieval performance for landmark buildings in Oxford (http://www.robots.ox.ac.uk/~vgg/data/oxbuildings/) and Paris (http://www.robots.ox.ac.uk/~vgg/data/parisbuildings/). A nonlinear variant of LFDA implemented using deep belief networks (DBN) and a kernelized version of LDE (KDE) were compared to our method. In terms of the mean average precision (mAP) score, we observed significant improvements using our method (mAP: 0.678 on Oxford / 0.700 on Paris) over raw SIFT (0.611 / 0.649), KDE (0.656 / 0.673), DBN (0.662 / 0.678), under the same number of the learned features and the same size of visual vocabulary.
Thanks to all the reviewers again. |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, benchmarking the embeddings shows great variance in quality and characteristics of the semantics captured by the tested embeddings. Finally, we show the impact of varying the number of dimensions and the resolution of each dimension on the effective useful features captured by the embedding space. Our contributions highlight the importance of embeddings for NLP tasks and the effect of their quality on the final results. | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | 0sZLsSijYosjR | review | 1,360,886,580,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"Yanqing Chen"
] | ICLR.cc/2013/conference | 2013 | review: Hello dear reviewer,
Thank you for your well thought out review. We hope to have a draft which addresses some of your comments shortly.
Regards |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, benchmarking the embeddings shows great variance in quality and characteristics of the semantics captured by the tested embeddings. Finally, we show the impact of varying the number of dimensions and the resolution of each dimension on the effective useful features captured by the embedding space. Our contributions highlight the importance of embeddings for NLP tasks and the effect of their quality on the final results. | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | KcIrcVwbnRc0P | review | 1,362,189,720,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"Andrew Maas"
] | ICLR.cc/2013/conference | 2013 | review: On the topic of comparing word representations, quality as a function of word frequency is something I've often found to be a problem. For example, rare words are often important for sentiment analysis, but many word representation learners produce poor representations for all but the top 1000 or so most frequent words. As this paper is focused on comparing representations, I think adding an experiment to assess quality of less common words would tremendously help the community understand the tradeoffs between word representation methods. |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, benchmarking the embeddings shows great variance in quality and characteristics of the semantics captured by the tested embeddings. Finally, we show the impact of varying the number of dimensions and the resolution of each dimension on the effective useful features captured by the embedding space. Our contributions highlight the importance of embeddings for NLP tasks and the effect of their quality on the final results. | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | -82Lr-SgHKmgJ | review | 1,360,855,200,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"anonymous reviewer 406c"
] | ICLR.cc/2013/conference | 2013 | title: review of The Expressive Power of Word Embeddings
review: The paper proposes a method for evaluating real-valued vector embeddings of words based on several word and word-pair classification tasks. Though evaluation of such embeddings is an interesting and important problem, the experimental setup used it virtually impossible to draw any interesting conclusions.
Some of the proposed evaluation tasks are a considerably less interesting than others. The Sentiment Polarity, for example, is certainly interesting and practically relevant, while the Regional Spellings task seems artificial. Moreover, performance on the latter is likely to be very sensitive to the regional distribution in the corpus to learn the embeddings. While identifying synonyms and and antonyms is an interesting problem, the formulation of the Synonyms and Antonyms task is too artificial. Instead of classifying a word pair as synonyms or antonyms or it would be far more interesting to perform three-way classification of such pairs into synonyms, antonyms, and neither. Note that there is little reason to think that embeddings learned by neural language models will capture the difference between antonyms and synonyms well because replacing a word with its antonym or synonym often has little effect on the probability of the sentence.
The experimental evaluation of the embeddings is unfortunately almost completely uninformative due to several confounding factors. The models used to produce the embeddings were trained on different datasets, with different vocabularies, context sizes, and number of passes through the data. Without controlling for these it is impossible to know the real reasons behind the differences in performance of the embeddings. All that can be concluded from the results in the paper is that some of the publicly available embeddings perform better than others on the proposed tasks. However, without controlling for the above factors, claims like 'Our work illustrates that significant differences in the information captured by each technique exist.' are unjustified.
The results obtained by reducing the amount of information in the embeddings are more informative. The fact that quantizing the real values in the embeddings does not drastically affect the classification performance is quite interesting. However, to make this result more convincing the authors need to control for the differences in the variance of the embeddings resulting from quantization. These differences are problematic because, as Turian at al. [14] showed, scaling embeddings by a constant can have a significant effect on classifier performance.
The results obtained using PCA to reduce the representation dimensionality are hard to interpret because the paper does not report the numbers for the linear and non-linear classifiers separately. This is a problem because reducing the input dimensionality has a much more drastic effect on the capacity of linear classifiers. Thus it is entirely possible that though the relevant information is stilled contained in the projected embeddings, the linear classifiers simply cannot take advantage of it.
While the authors mention 4-fold cross validation and a development set, it is unclear whether the set was one of the folds. Does it mean that two folds were used for training, one for validation, and one for testing?
It is also unclear which method was used to produce Figure 3(b).
The probabilities in Table 2 are given in percent but the caption does not
state that. |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, benchmarking the embeddings shows great variance in quality and characteristics of the semantics captured by the tested embeddings. Finally, we show the impact of varying the number of dimensions and the resolution of each dimension on the effective useful features captured by the embedding space. Our contributions highlight the importance of embeddings for NLP tasks and the effect of their quality on the final results. | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | 224E22nDWH2Ia | review | 1,362,170,040,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"anonymous reviewer af94"
] | ICLR.cc/2013/conference | 2013 | title: review of The Expressive Power of Word Embeddings
review: The submission considers 3 types of publicly-available distributed representations of words: produced by SENNA (Collobert and Weston, 11), the hierarchical bilinear language model (Mnih and Hinton, 2007) and Turian et al's (2010) implementation of the SENNA method. They compare performance of classifiers using the embeddings on 5 different tasks (e.g., sentiment polarity, noun gender).
In a way, this submission is similar to the work of Turian et al (2010) where different types of word representations are compared, however, here the authors just use available representations rather than induce their own. Consequently, they cannot shed the light on which factors affect the resulting performance the most (e.g., data size, loss used, regularization regime). Consequently, the paper may be useful when deciding which representations to download, but it does not provide sufficiently interesting insights on which methods are preferable.
The discussion of pair classification seems somewhat misleading. The authors attribute improved results on classifying pairs (e.g., where the first word or the second name in a given pair is masculine) w.r.t. classifying words (e.g., whether a name is masculine or feminine) to the lack of linear separability, whereas it is obvious that pairwise classification is always easier. Basically, as we know that there is a single word of each class in a pair, it is an ensemble prediction (one classifier using 1st word, another - the second one). So, I am not really sure what this result is actually telling us.
I am also not sure how interesting the truncation experiments are. I would be much more interesting to see which dimensionality of the representation is needed, but mostly which initial representations, as it affects training performance (at least linearly). However, again, this is not really possible without retraining the models.
Pros:
- I believe that a high-quality comparison of existing methods for inducing word representations would be an important contribution.
Cons:
- The paper compares downloaded representations rather than the methods. It does not answer the question which method is better (for each task)
- Some details of the experimental set-up are a little unclear. E.g., the paper mentions that logistic regression, an SVM with the linear kernel, and an SVM with the RBF kernel are used. However, it does not clarify which classifier was used and where. Were classifiers also chosen with cross validation?
- Some of the tasks (e.g., choosing British vs. American spelling) could benefit from using methods exploiting wider document context (rather than ngrams). There have been some methods for incorporating this information (see, e.g., Huang et al, 2012). It would be interesting to have such methods in the list.
Minor:
- abstract: 'our evaluation shows that embeddings … capture deep semantics', I am not sure what the authors mean by 'deep' semantics. However, I doubt that any of the considered tasks could be considered as such. |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, benchmarking the embeddings shows great variance in quality and characteristics of the semantics captured by the tested embeddings. Finally, we show the impact of varying the number of dimensions and the resolution of each dimension on the effective useful features captured by the embedding space. Our contributions highlight the importance of embeddings for NLP tasks and the effect of their quality on the final results. | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | QrngQQuNMcQNZ | review | 1,362,416,940,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"anonymous reviewer 24e2"
] | ICLR.cc/2013/conference | 2013 | title: review of The Expressive Power of Word Embeddings
review: This paper compares three available word vector embeddings on several tasks.
The paper lacks somewhat in novelty since the vectors are simply downloaded. This also makes their comparison somewhat harder since the final result is largely dependent on the training corpora.
A comparison to the vectors of Huang et al 2012 would be interesting since they are very related.
It would be somewhat more interesting if the methods had been trained on the same dataset and on harder or real tasks such as NER (as done by Turian).
For a more semantic evaluation the datasets of WordSim353 or Huang et al could be used to compare to human judgments.
It would be very interesting to find if some of the dimensions are well correlated with some of the labels of the supervised tasks that are considered. |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, benchmarking the embeddings shows great variance in quality and characteristics of the semantics captured by the tested embeddings. Finally, we show the impact of varying the number of dimensions and the resolution of each dimension on the effective useful features captured by the embedding space. Our contributions highlight the importance of embeddings for NLP tasks and the effect of their quality on the final results. | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | ggT4SGBq4iS57 | review | 1,362,457,800,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"Yanqing Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena"
] | ICLR.cc/2013/conference | 2013 | review: We thank the anonymous reviewers for their thoughtful comments. We have taken them into consideration, and have uploaded a revised manuscript to arXiv over the weekend. (it should be available in a few hours)
Specific changes include:
1. We have evaluated 3-class versions of our classifiers on the sentiment and synonym/antonym tasks.
2. We have reworked to focus more explicitly on term vs. pair tasks, and believe that this is a more clear presentation of our ideas
3. We have illustrated the convergence of linear vs. nonlinear classifiers as dimensions are reduced by PCA.
4. We have tried to modify specific language and tone that the reviewers found objectionable.
We have some specific comments to each of our reviewers:
Anonymous 406c, (1st reviewer, 2/14/2013)
- Scaling of embeddings:
We investigated scaling the embeddings to control the variance after PCA as recommended by Turian (2010). Results did not significantly change, and so we left the original in there. We have posted the corresponding plots with scaling:
(by embedding) http://goo.gl/wpXmD
(by task) http://goo.gl/hWYkX
You might also be interested in the fact that we ran all of our experiments on scaled and unscaled versions of the embeddings, but did not notice significant differences between them. We attribute both these results to the fact that we only used embeddings as features - Turian (2010) comments that his scaling approach is for mixing embedding features with words represented by binary features.
Anonymous af94 (2nd reviewer, 3/1/2013)
- Similar to Turian (2010)
We’re quite different actually - Turian (2010) studied enhancing existing NLP tools with a variety of embeddings. This means that he combined the embeddings with existing features (from words, n-grams, or characters).
Instead, we use the embeddings as the sole features to understand their quality on their own. Moreover, we propose term/pair classification tasks to isolate the effect of context that influence the results of sequence tagging tasks.
- Pairwise classification is easier because its an ensemble
You raise an interesting point here. To explain our views: in our initial experiments we used the element-wise subtraction between two embeddings as features and they outperformed the single word version of the experiment. This seems to indicate that the embeddings encode information in the direction of the vector between two points in the space. We later modified the experiment to the one we report (it seemed more general at the time).
- Classifiers? Which? Where?
We use Linear SVM, Logistic Regression, RBF Kernel SVM on all our tasks and report the geometric mean of their results for each task. Each classifier result is obtained with 4-fold cross validation setup.
We also have some general comments on the nature of our tasks, and the decision to evaluate existing embeddings instead of training new ones
On the nature of our tasks:
Most of the previous evaluation on word embeddings has been done in the context of sequence tagging problems. While practical, we believe that this approach complicates the actual analysis of the features learned by neural language models.
For example, in a typical part of speech tagging setup the performance of the tagger over out-of-vocabulary words (without using character features) is much higher than random and might reach 70-80% accuracy. The decision, here, is clearly induced by the context. The influence of the context on the performance of classification, makes it harder to estimate the intrinsic quality of the word embeddings.
We agree that not all of our tasks map directly to traditional NLP tasks, but this is intended - each task illustrates one type of interesting behavior that can be found in the embedding space. Some behaviors are ones known to exist (e.g. plurality is a sub-component of Part-of-Speech tagging), one seems practical (sentiment), and one is just cool (e.g. synonym / antonym). The list is certainly not comprehensive, and we would appreciate additional suggestions.
On not training our own embeddings:
We are interested in NLP applications of feature learning, and we believe our results are valuable to consumers of such technology. We strove to evaluate the quality of what is available other researchers would use in their work.
We do agree it is hard to compare features produced under different conditions. It is a matter of fact that some of the embeddings will be better than others. The differences could be attributed to training specific factors (e.g. training time and datasets), or to the technique itself. In light of this difficulty we have tried to highlight that we are comparing the embeddings themselves, and not the techniques. We have toned down language contrary to this message.
On a final note, we would like to again thank our anonymous reviewers, and the greater ICLR community. |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, benchmarking the embeddings shows great variance in quality and characteristics of the semantics captured by the tested embeddings. Finally, we show the impact of varying the number of dimensions and the resolution of each dimension on the effective useful features captured by the embedding space. Our contributions highlight the importance of embeddings for NLP tasks and the effect of their quality on the final results. | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | 7AoBA7CD4T7Fu | review | 1,363,573,560,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"Yanqing Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena"
] | ICLR.cc/2013/conference | 2013 | review: We thank the anonymous reviewers for the reference to Huang et al (2012). We have added the embeddings generated by Huang to our comparison, and we believe that they are an interesting addition.
The latest version of our submission can be found on arxiv. |
7hXs7GzQHo-QK | The Neural Representation Benchmark and its Evaluation on Brain and
Machine | [
"Charles Cadieu",
"Ha Hong",
"Dan Yamins",
"Nicolas Pinto",
"Najib J. Majaj",
"James J. DiCarlo"
] | A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible to directly test representational learning algorithms directly against the representations contained in neural systems. Here, we propose a new benchmark for visual representations on which we have directly tested the neural representation in multiple visual cortical areas in macaque (utilizing data from [Majaj et al., 2012]), and on which any computer vision algorithm that produces a feature space can be tested. The benchmark measures the effectiveness of the neural or machine representation by computing the classification loss on the ordered eigendecomposition of a kernel matrix [Montavon et al., 2011]. In our analysis we find that the neural representation in visual area IT is superior to visual area V4, indicating an increase in representational performance in higher levels of the cortical visual hierarchy. In our analysis of representational learning algorithms, we find that a number of current algorithms approach the representational performance of V4. Impressively, we find that a recent supervised algorithm [Krizhevsky et al., 2012] achieves performance equal to that of IT for an intermediate level of image variation difficulty, and performs between V4 and IT at a higher difficulty level. We believe this result represents a major milestone: it is the first learning algorithm we have found that produces a representation on par with IT on this task of intermediate difficulty. We hope that this benchmark will serve as an initial rallying point for further correspondence between representations derived in brains and machines. | [
"evaluation",
"brain",
"neural representation benchmark",
"machine",
"representations",
"neural representation",
"benchmark",
"analysis",
"representational performance"
] | https://openreview.net/pdf?id=7hXs7GzQHo-QK | https://openreview.net/forum?id=7hXs7GzQHo-QK | bbQXGy3KgUrcP | comment | 1,363,649,700,000 | zzS1zF0bHj6V7 | [
"everyone"
] | [
"Charles Cadieu"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and feedback. Here are some specific replies. (>-mark indicates quote from review)
> * The two macaque subjects in the study by Majaj et al (2012) are unlikely to have been exposed to images of 3 object categories in the dataset: cars, planes or other animals such as cows and elephants. They may have been exposed to images from the 4 remaining object classes: faces, chairs, tables and fruits. By consequence, their V4 or IT cortical areas might not be trained to recognize, even after prolonged exposure, that the image of a car at an angle is still a car with a variation, and not another type of objects. The authors do raise the question whether the neural representation could be enhanced with increased exposure.
Some additional information, not included in the paper:
* Our data suggest that during the passive viewing paradigm there is no change in classifier performance trained on the early part of the recording vs. a later part of the recording. So we see no exposure dependent classifier improvement through time.
* When examining per-category classifier performance, there is no obvious pattern between the two sets of categories you point out (cars/planes/animals vs. faces/chairs/tables/fruits).
* The absolute performance of classifiers trained for Cars, or Planes or Animals does not seem to be significantly different from classifiers trained on the other categories.
It remains an interesting question how the neural representational performance would change through training the animal to make the desired categorizations.
> * The paper does mention that only about a hundred sites, on the cortex surface, are selected for the image categorization task, compared to all the tens of thousands of hidden units in the deep architecture. Some further discussion on the fairness of such a comparison would be welcome.
One important point is that the measure we have chosen, by measuring accuracy against complexity, allows us to compare representations of different dimensionality. How a representation is affected by subsampling depends on the properties of that representation, and it appears that the neural representation is quite robust to such subsampling. For example, we have attempted to estimate the convergence of our measure as we increase the number of recording sites, from within our sample. It has been somewhat surprising to us that this curve appears to asymptote so quickly, but of course this may be due to a sampling bias in the procedure.
There are a number of factors that may bias the neural results related to sampling such a small number of sites from the cortex. Here is a short discussion of some of these factors:
* Neurons that are close together in cortical space are typically correlated. This indicates that the number of relevant dimensions is far less than the total number of neurons in cortex. This is in-line with the fast convergence we observe of our measurement with increasing the number of sites.
* The placement of the grids, and the spacing between electrodes in the grid may affect our measurement.
* We examine multi-unit activity, instead of individual neurons. At the least this indicates that we are recording from more neurons than the number of sites. We estimate that the number of total neurons we are recording from is about 5x times the number of multi-units (estimated using the spike count ratio between multi-units and single-units collected in V4 and IT in our lab). It is not clear how single units would change our result, if at all.
* A point that may not be obvious is that an inherit property of electrophysiology is that we are “blind” to the neurons that do not fire during our experimental procedure. Therefore, we may be introducing a bias by recording from only active neurons and “discarding” neurons that are not active. This would also introduce an underestimate to the number of potential neurons we are effectively recording. Note that including such silent neurons would not affect our kernel analysis measure, just the estimate of the total number of neurons we recorded.
* We have a hardware limitation that limits us to recording 128 sites at a time. For a given animal, we chose the top-128 best visually driven sites. “Visual drivenness” was measured with a separate pilot image set (see Rust and DiCarlo 2012 and Chou et al.). Roughly this measure is the mean across the top 10% of absolute per-image d-primes between an image and blank. The top 10% is cross-validated and the absolute value is necessary to account for inhibitory sites. This sampling bias may affect our measure by discarding neural activity not relevant for the task, thus increasing or KA-AUC estimate of the neural representation.
One final point, at this technological point in time, we are only able to record from 128 multi-unit sites simultaneously. We achieve the total number of sites through multiple recording sessions and multiple animals. Given these limitations, this dataset is cutting-edge in terms of the number of sites, the number of images presented, and the number of repetitions of each image, especially for IT cortex recordings.
> The Gaussian kernel uses a single coefficient sigma for all the features (i.e., all the neurons / hidden units). On one hand, the neural data are taken on the visual cortex areas V4 and IT, where all the electrode sites are expected to measure information that is relevant for image recognition tasks in general, and the deep learning architectures were all trained on image classification tasks. On the other hand, not all the features (hidden units or electrode sites) are equally relevant, all the time, to all these tasks, but their values are all scaled nevertheless. Would it make sense to tune the individual per-feature sigma coefficients in the Gaussian kernel, as in Chapelle et al (2002) 'Choosing multiple parameters for support vector machines'?
Under our proposed methodology, modifying the representation, even by rescaling dimensions, during test time is not allowed. It would be reasonable to take a representation, apply the method in Chapelle et al (2002) on the training set, and thus create a new representation to be used during testing. This sounds like a good idea, and we are interested to see what else the community comes up with!
> Are all the 5 references by Pinto et al. necessary for this paper?
Most are, but we will remove the Cosyne 2010 abstract and the FG 2011 paper in the next revision, as these points are covered by the remaining references.
> The authors do not indicate how the images from the dataset were split among the two monkeys (were they shown the same images, or two, different, random sets of images?) and how the neural observations from the different electrode sites (58 IT and 70 V4 sites on one monkey, 110 IT and 58 V4 sites on the other monkey) were grouped. My guess is that the same sets of images were shown to the two monkeys and that their responses were concatenated into IT or V4 matrices of site vs image.
You are correct. The same sets of images (all of them) were shown to each of the monkeys. The sites from each monkey IT cortex were concatenated, as were the sites from each monkey V4 cortex. We will update the text to clarify.
> The authors do not need to mention the low computational complexity of the LSE loss (section 2.2). It is not more complex than the logistic loss and the real point is what they say about intra-class variance and inter-class variance.
Thanks for the feedback, we will update the text.
> I do not fully understand the protocol in section 2.3, namely: 'we evaluate 10 pre-defined subsets of images, each taking 80% of the data from each variation level'.
We have updated the text to clarify.:
For each variation level, we compute the kernel analysis curve and KA-AUC ten times, each time sampling 80% of the images with replacement. The ten samples for each variation level are fixed for all representations.
> Is total dimensionality D equal to the number of samples n?
Yes. We indicate this now in the text.
Will update arXiv posting shortly. |
7hXs7GzQHo-QK | The Neural Representation Benchmark and its Evaluation on Brain and
Machine | [
"Charles Cadieu",
"Ha Hong",
"Dan Yamins",
"Nicolas Pinto",
"Najib J. Majaj",
"James J. DiCarlo"
] | A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible to directly test representational learning algorithms directly against the representations contained in neural systems. Here, we propose a new benchmark for visual representations on which we have directly tested the neural representation in multiple visual cortical areas in macaque (utilizing data from [Majaj et al., 2012]), and on which any computer vision algorithm that produces a feature space can be tested. The benchmark measures the effectiveness of the neural or machine representation by computing the classification loss on the ordered eigendecomposition of a kernel matrix [Montavon et al., 2011]. In our analysis we find that the neural representation in visual area IT is superior to visual area V4, indicating an increase in representational performance in higher levels of the cortical visual hierarchy. In our analysis of representational learning algorithms, we find that a number of current algorithms approach the representational performance of V4. Impressively, we find that a recent supervised algorithm [Krizhevsky et al., 2012] achieves performance equal to that of IT for an intermediate level of image variation difficulty, and performs between V4 and IT at a higher difficulty level. We believe this result represents a major milestone: it is the first learning algorithm we have found that produces a representation on par with IT on this task of intermediate difficulty. We hope that this benchmark will serve as an initial rallying point for further correspondence between representations derived in brains and machines. | [
"evaluation",
"brain",
"neural representation benchmark",
"machine",
"representations",
"neural representation",
"benchmark",
"analysis",
"representational performance"
] | https://openreview.net/pdf?id=7hXs7GzQHo-QK | https://openreview.net/forum?id=7hXs7GzQHo-QK | zzS1zF0bHj6V7 | review | 1,362,225,300,000 | 7hXs7GzQHo-QK | [
"everyone"
] | [
"anonymous reviewer 4738"
] | ICLR.cc/2013/conference | 2013 | title: review of The Neural Representation Benchmark and its Evaluation on Brain and
Machine
review: This paper applies the methodology for 'kernel analysis of deep networks' (Montavon et al, 2011) to the neural code measured on two areas (V4 and IT) on the visual cortex of the macaque. It compares, on the same test set, the biological responses of V4 or IT (spike counts measured at about 100 electrode sites) to the hidden unit activations on the penultimate layer of several state-of-the-art deep learning architectures trained on large image datasets: the 10 million YouTube images and deep sparse auto-encoder paper by Le et al (2012), a convolutional network by Krizhevsky et al (2012), two papers by Pinto et al, one on the V1 model, another on the high throughput L3 model class and the unsupervised learning paper by Coates et al (2012).
The authors show that the IT area of the visual cortex seems to have a neural code that is more discriminative than the neural code of the V4 area for a 7-class image categorization task under variations of pose, position and scale. The authors also show that one supervised deep learning algorithm (Krizhevsky et al, 2012) even produces hidden layer representation that seems to outperform IT on that task.
Pros, novelty and quality:
This paper is the first to apply the same method for evaluating feature representations of both the biological neural code (measured on the visual cortex of a primate) and of hidden unit activations in state-of-the-art methods for image classification. It provides an extensive comparison of the penultimate hidden layer of several deep learning algorithms, vs. the V4 and IT areas of the visual cortex of two macaques. As such, it provides insight into which algorithms make a good hidden representation of images.
The method for evaluating the feature representations is essentially non-parametric and provides a robust way to assess the complexity of the decision boundary. The kernel analysis method measures what percentage of the information coming from the sample images is required to successfully train a nonlinear Gaussian SVM-like classifier on the features (neural code or hidden unit activations), or a linear classifier in the dual space, for a simple image categorization task. The kernel PCA approach of keeping the top d eigenvectors of the kernel matrix in the dual solution is more robust than the cross-validation performance or than the number of support vectors, when the number of samples is small.
The paper is well written, the claims are well supported by the experiments. The metric used in this study is robust and the main results (IT vs V4, Krizhevsky et al 2012 vs IT on high variations) are statistically significant.
Cons:
There are no cons per se in this paper, only limitations in the methodology (linked to the choice of the dataset) that could be improved upon by using a more extensive dataset. Most of these limitations have been preemptively mentioned and discussed by the authors in section 4.
* The two macaque subjects in the study by Majaj et al (2012) are unlikely to have been exposed to images of 3 object categories in the dataset: cars, planes or other animals such as cows and elephants. They may have been exposed to images from the 4 remaining object classes: faces, chairs, tables and fruits. By consequence, their V4 or IT cortical areas might not be trained to recognize, even after prolonged exposure, that the image of a car at an angle is still a car with a variation, and not another type of objects. The authors do raise the question whether the neural representation could be enhanced with increased exposure.
* The paper does mention that only about a hundred sites, on the cortex surface, are selected for the image categorization task, compared to all the tens of thousands of hidden units in the deep architecture. Some further discussion on the fairness of such a comparison would be welcome.
Other comments:
* The Gaussian kernel uses a single coefficient sigma for all the features (i.e., all the neurons / hidden units). On one hand, the neural data are taken on the visual cortex areas V4 and IT, where all the electrode sites are expected to measure information that is relevant for image recognition tasks in general, and the deep learning architectures were all trained on image classification tasks. On the other hand, not all the features (hidden units or electrode sites) are equally relevant, all the time, to all these tasks, but their values are all scaled nevertheless. Would it make sense to tune the individual per-feature sigma coefficients in the Gaussian kernel, as in Chapelle et al (2002) 'Choosing multiple parameters for support vector machines'?
* Are all the 5 references by Pinto et al. necessary for this paper?
Minor comments:
* The authors do not indicate how the images from the dataset were split among the two monkeys (were they shown the same images, or two, different, random sets of images?) and how the neural observations from the different electrode sites (58 IT and 70 V4 sites on one monkey, 110 IT and 58 V4 sites on the other monkey) were grouped. My guess is that the same sets of images were shown to the two monkeys and that their responses were concatenated into IT or V4 matrices of site vs image.
* The authors do not need to mention the low computational complexity of the LSE loss (section 2.2). It is not more complex than the logistic loss and the real point is what they say about intra-class variance and inter-class variance.
* I do not fully understand the protocol in section 2.3, namely: 'we evaluate 10 pre-defined subsets of images, each taking 80% of the data from each variation level'.
* Is total dimensionality D equal to the number of samples n? |
7hXs7GzQHo-QK | The Neural Representation Benchmark and its Evaluation on Brain and
Machine | [
"Charles Cadieu",
"Ha Hong",
"Dan Yamins",
"Nicolas Pinto",
"Najib J. Majaj",
"James J. DiCarlo"
] | A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible to directly test representational learning algorithms directly against the representations contained in neural systems. Here, we propose a new benchmark for visual representations on which we have directly tested the neural representation in multiple visual cortical areas in macaque (utilizing data from [Majaj et al., 2012]), and on which any computer vision algorithm that produces a feature space can be tested. The benchmark measures the effectiveness of the neural or machine representation by computing the classification loss on the ordered eigendecomposition of a kernel matrix [Montavon et al., 2011]. In our analysis we find that the neural representation in visual area IT is superior to visual area V4, indicating an increase in representational performance in higher levels of the cortical visual hierarchy. In our analysis of representational learning algorithms, we find that a number of current algorithms approach the representational performance of V4. Impressively, we find that a recent supervised algorithm [Krizhevsky et al., 2012] achieves performance equal to that of IT for an intermediate level of image variation difficulty, and performs between V4 and IT at a higher difficulty level. We believe this result represents a major milestone: it is the first learning algorithm we have found that produces a representation on par with IT on this task of intermediate difficulty. We hope that this benchmark will serve as an initial rallying point for further correspondence between representations derived in brains and machines. | [
"evaluation",
"brain",
"neural representation benchmark",
"machine",
"representations",
"neural representation",
"benchmark",
"analysis",
"representational performance"
] | https://openreview.net/pdf?id=7hXs7GzQHo-QK | https://openreview.net/forum?id=7hXs7GzQHo-QK | RRN_zPMIpEzTn | comment | 1,363,649,460,000 | fD8BKQYEClkvP | [
"everyone"
] | [
"Charles Cadieu"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and feedback. Here are some comments on your suggestions:
> The dataset used in the paper is composed of objects that are superposed to an independent background. While authors motivate their choice by controlling the factors of variations in the representation, it would be interesting to know whether machine learning or brain representations benefit most from this particular setting.
As you point out, we inevitably have to make trade-offs when designing our experiments. Your feedback on removing this controlled variation as an interesting question helps us to design future datasets for experiments.
> This paper also raises the important question of what is the best way of comparing representations. One can wonder, for example, whether the reduced set of kernels considered here (Gaussian kernels with multiple scales) introduces some bias in favor of 'Gaussian-friendly' representations.
We agree that exploring the effect of the kernel choice is an interesting direction. We hope to include this in future work (possibly a longer journal version). |
7hXs7GzQHo-QK | The Neural Representation Benchmark and its Evaluation on Brain and
Machine | [
"Charles Cadieu",
"Ha Hong",
"Dan Yamins",
"Nicolas Pinto",
"Najib J. Majaj",
"James J. DiCarlo"
] | A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible to directly test representational learning algorithms directly against the representations contained in neural systems. Here, we propose a new benchmark for visual representations on which we have directly tested the neural representation in multiple visual cortical areas in macaque (utilizing data from [Majaj et al., 2012]), and on which any computer vision algorithm that produces a feature space can be tested. The benchmark measures the effectiveness of the neural or machine representation by computing the classification loss on the ordered eigendecomposition of a kernel matrix [Montavon et al., 2011]. In our analysis we find that the neural representation in visual area IT is superior to visual area V4, indicating an increase in representational performance in higher levels of the cortical visual hierarchy. In our analysis of representational learning algorithms, we find that a number of current algorithms approach the representational performance of V4. Impressively, we find that a recent supervised algorithm [Krizhevsky et al., 2012] achieves performance equal to that of IT for an intermediate level of image variation difficulty, and performs between V4 and IT at a higher difficulty level. We believe this result represents a major milestone: it is the first learning algorithm we have found that produces a representation on par with IT on this task of intermediate difficulty. We hope that this benchmark will serve as an initial rallying point for further correspondence between representations derived in brains and machines. | [
"evaluation",
"brain",
"neural representation benchmark",
"machine",
"representations",
"neural representation",
"benchmark",
"analysis",
"representational performance"
] | https://openreview.net/pdf?id=7hXs7GzQHo-QK | https://openreview.net/forum?id=7hXs7GzQHo-QK | fD8BKQYEClkvP | review | 1,362,156,600,000 | 7hXs7GzQHo-QK | [
"everyone"
] | [
"anonymous reviewer b28a"
] | ICLR.cc/2013/conference | 2013 | title: review of The Neural Representation Benchmark and its Evaluation on Brain and
Machine
review: The paper presents a benchmark for comparing representations of image data in brains and machines. The benchmark consists of looking at how the image categorization task is encoded in the leading kernel principal components of the representation, thus leading to an analysis of complexity and noise. The paper contains extensive experiments based on a representive set of state-of-the-art learning algorithms on the machine learning side, and real recordings of macaques brain activity on the neural side.
The research presented in this paper is well-conducted, timely and highly innovative. It is to my knowledge the first time, that representations obtained with state-of-the-art machine learning techniques for vision are systematically compared with real neural representations. The authors motivate the use of kernel analysis, by the inbuilt robustness to sample size being desirable in this heterogeneous setting.
The dataset used in the paper is composed of objects that are superposed to an independent background. While authors motivate their choice by controlling the factors of variations in the representation, it would be interesting to know whether machine learning or brain representations benefit most from this particular setting.
This paper also raises the important question of what is the best way of comparing representations. One can wonder, for example, whether the reduced set of kernels considered here (Gaussian kernels with multiple scales) introduces some bias in favor of 'Gaussian-friendly' representations. Also, as suggested by the authors, it could be that the way neural recordings are represented leads to underestimating their discriminative ability. |
7hXs7GzQHo-QK | The Neural Representation Benchmark and its Evaluation on Brain and
Machine | [
"Charles Cadieu",
"Ha Hong",
"Dan Yamins",
"Nicolas Pinto",
"Najib J. Majaj",
"James J. DiCarlo"
] | A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible to directly test representational learning algorithms directly against the representations contained in neural systems. Here, we propose a new benchmark for visual representations on which we have directly tested the neural representation in multiple visual cortical areas in macaque (utilizing data from [Majaj et al., 2012]), and on which any computer vision algorithm that produces a feature space can be tested. The benchmark measures the effectiveness of the neural or machine representation by computing the classification loss on the ordered eigendecomposition of a kernel matrix [Montavon et al., 2011]. In our analysis we find that the neural representation in visual area IT is superior to visual area V4, indicating an increase in representational performance in higher levels of the cortical visual hierarchy. In our analysis of representational learning algorithms, we find that a number of current algorithms approach the representational performance of V4. Impressively, we find that a recent supervised algorithm [Krizhevsky et al., 2012] achieves performance equal to that of IT for an intermediate level of image variation difficulty, and performs between V4 and IT at a higher difficulty level. We believe this result represents a major milestone: it is the first learning algorithm we have found that produces a representation on par with IT on this task of intermediate difficulty. We hope that this benchmark will serve as an initial rallying point for further correspondence between representations derived in brains and machines. | [
"evaluation",
"brain",
"neural representation benchmark",
"machine",
"representations",
"neural representation",
"benchmark",
"analysis",
"representational performance"
] | https://openreview.net/pdf?id=7hXs7GzQHo-QK | https://openreview.net/forum?id=7hXs7GzQHo-QK | E6HmsiyOvphK_ | review | 1,362,226,860,000 | 7hXs7GzQHo-QK | [
"everyone"
] | [
"anonymous reviewer d59c"
] | ICLR.cc/2013/conference | 2013 | title: review of The Neural Representation Benchmark and its Evaluation on Brain and
Machine
review: This paper assesses feature learning algorithms by comparing their performance on an object classification task to that of Macaque IT and V4 neurons. The work provides a new dataset of images, an analysis method for comparing feature representations based on kernel analysis, and neural feature vectors recorded from V4 and IT neurons in response to these images. The authors evaluate a number of recent representational learning algorithms, and identify a recent approach based on deep convolutional networks outperforms V4 and IT neurons.
The paper is the first of its kind in providing easy tools to evaluate new representations against high level neural visual representations. It's comparison method differs from prior work by investigating representational learning with respect to a task, and hence is less influenced by potentially task-irrelevant idiosyncrasies of the neural response. The final conclusion reached, that recent models are beginning to surpass V4 and IT models, is very interesting. The authors have clearly explained their rationale behind the many design choices required, and their choices seem very reasonable.
Because of the many design choices to be made in reducing neural data to a feature representation (the use of multi units rather than singular units, time averaging, short presentation times--many of which are discussed by the authors in the text), the resulting V4/IT performance is likely a lower bound on the true performance. To surpass a lower bound is good news, but to be a useful metric for future research efforts, this lower bound would should lie above current models' performance. The fact that the Krizhevsky model already outperforms V4/IT means there is less reason to compare future representation algorithms using the proposed metric in its current form.
The kernel analysis metric asks whether neural and artificial data can achieve similar classification performance for a given model complexity, but this is a separate question from asking whether the neural representation is similar to the artificial representation; e.g., for a classification task, one could imagine many different pairwise similarity structures that would remain linearly separable (or said with the standard metaphor, both a bird and a plane can fly, but rely on different mechanisms). While some aspects of the neural response may be task irrelevant, it may be complementary to augment the KA-AUC approach with a similarity-based approach. This could also be computed from the collected data and would help map levels within a computational model to visual brain areas. In general a more extensive discussion of and contrast with the Kriegeskorte approach would be helpful. |
7hXs7GzQHo-QK | The Neural Representation Benchmark and its Evaluation on Brain and
Machine | [
"Charles Cadieu",
"Ha Hong",
"Dan Yamins",
"Nicolas Pinto",
"Najib J. Majaj",
"James J. DiCarlo"
] | A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible to directly test representational learning algorithms directly against the representations contained in neural systems. Here, we propose a new benchmark for visual representations on which we have directly tested the neural representation in multiple visual cortical areas in macaque (utilizing data from [Majaj et al., 2012]), and on which any computer vision algorithm that produces a feature space can be tested. The benchmark measures the effectiveness of the neural or machine representation by computing the classification loss on the ordered eigendecomposition of a kernel matrix [Montavon et al., 2011]. In our analysis we find that the neural representation in visual area IT is superior to visual area V4, indicating an increase in representational performance in higher levels of the cortical visual hierarchy. In our analysis of representational learning algorithms, we find that a number of current algorithms approach the representational performance of V4. Impressively, we find that a recent supervised algorithm [Krizhevsky et al., 2012] achieves performance equal to that of IT for an intermediate level of image variation difficulty, and performs between V4 and IT at a higher difficulty level. We believe this result represents a major milestone: it is the first learning algorithm we have found that produces a representation on par with IT on this task of intermediate difficulty. We hope that this benchmark will serve as an initial rallying point for further correspondence between representations derived in brains and machines. | [
"evaluation",
"brain",
"neural representation benchmark",
"machine",
"representations",
"neural representation",
"benchmark",
"analysis",
"representational performance"
] | https://openreview.net/pdf?id=7hXs7GzQHo-QK | https://openreview.net/forum?id=7hXs7GzQHo-QK | g05Ygn6IJZ0iX | comment | 1,363,649,760,000 | E6HmsiyOvphK_ | [
"everyone"
] | [
"Charles Cadieu"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and feedback. (>-mark indicates quote from review)
> Because of the many design choices to be made in reducing neural data to a feature representation (the use of multi units rather than singular units, time averaging, short presentation times--many of which are discussed by the authors in the text), the resulting V4/IT performance is likely a lower bound on the true performance. To surpass a lower bound is good news, but to be a useful metric for future research efforts, this lower bound would should lie above current models' performance. The fact that the Krizhevsky model already outperforms V4/IT means there is less reason to compare future representation algorithms using the proposed metric in its current form.
These are good points. We did not know what to expect before we began measuring models and have been quite surprised by the performance of the Krizhevsky et al. model. Even given that this model surpasses IT, we still believe it is a relevant benchmark for algorithmic research. There are many interesting factors that go into the performance that will be worthwhile exploring, especially those related to efficiency (our opinion).
Furthermore, given the assumed “lower-bound” nature of the neural representation, we hope that this effort will encourage experimentalists to collect higher lower-bounds of the neural representation. Ideally, over time, we imagine a scenario similar to the progression in computer vision of increasingly challenging benchmarks of neural representation.
> The kernel analysis metric asks whether neural and artificial data can achieve similar classification performance for a given model complexity, but this is a separate question from asking whether the neural representation is similar to the artificial representation; e.g., for a classification task, one could imagine many different pairwise similarity structures that would remain linearly separable (or said with the standard metaphor, both a bird and a plane can fly, but rely on different mechanisms). While some aspects of the neural response may be task irrelevant, it may be complementary to augment the KA-AUC approach with a similarity-based approach. This could also be computed from the collected data and would help map levels within a computational model to visual brain areas. In general a more extensive discussion of and contrast with the Kriegeskorte approach would be helpful.
This is a very good point. We think matching neural and model representations at ever increasing levels of detail is an important pursuit. Generally, we consider a sort of “hierarchy of measures” of increasing specificity between neural responses and model responses. The one we have proposed here is relatively abstract, and task dependent by intention. The methods and approach of Kriegeskorte measures a, relatively, more constraining mapping between neural and model representations. As the current manuscript is longer than the conference organizers had hoped, we will reserve a more extensive discussion of the Kriegeskorte approach for a longer journal version of the manuscript. In ultimately choosing a measure, which level of abstraction one chooses to be satisfied with is largely dependent on one’s goals. |
PRuOK_LY_WPIq | Matrix Approximation under Local Low-Rank Assumption | [
"Joonseok Lee",
"Seungyeon Kim",
"Guy Lebanon",
"Yoram Singer"
] | Matrix approximation is a common tool in machine learning for building accurate prediction models for recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model where we assume instead that the matrix is only locally of low-rank, leading to a representation of the observed matrix as a weighted sum of low-rank matrices. We analyze the accuracy of the proposed local low-rank modeling. Our experiments show improvements in prediction accuracy in recommendation tasks. | [
"local",
"assumption matrix approximation",
"observed matrix",
"matrix approximation",
"common tool",
"machine learning",
"accurate prediction models",
"recommendation systems",
"text mining",
"computer vision"
] | https://openreview.net/pdf?id=PRuOK_LY_WPIq | https://openreview.net/forum?id=PRuOK_LY_WPIq | JNpPfPeAkDJqK | review | 1,363,672,020,000 | PRuOK_LY_WPIq | [
"everyone"
] | [
"simon bolivar"
] | ICLR.cc/2013/conference | 2013 | review: It has already been mentioned above, but I checked the longer version of the document posted at http://www.cc.gatech.edu/~lebanon/papers/lee_icml_2013.pdf
and there really is not enough discussion of the huge previous literature on locally low rank representations, going back at least as far back as
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1671801
and continuing with *many* recent works which represent data precisely as a weighted combination of local low rank matrices, for example, any of the papers on subspace clustering (especially if the model is restarted several times), or anchor graph embeddings of W. Liu et al, or the Locally Linear Coding of Yang et. al.
This does not even begin to touch the many manifold learning papers which explicitly model data via locally linear structures (which are necessarily locally low rank) and glue these together to get a parameterization of the data. |
PRuOK_LY_WPIq | Matrix Approximation under Local Low-Rank Assumption | [
"Joonseok Lee",
"Seungyeon Kim",
"Guy Lebanon",
"Yoram Singer"
] | Matrix approximation is a common tool in machine learning for building accurate prediction models for recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model where we assume instead that the matrix is only locally of low-rank, leading to a representation of the observed matrix as a weighted sum of low-rank matrices. We analyze the accuracy of the proposed local low-rank modeling. Our experiments show improvements in prediction accuracy in recommendation tasks. | [
"local",
"assumption matrix approximation",
"observed matrix",
"matrix approximation",
"common tool",
"machine learning",
"accurate prediction models",
"recommendation systems",
"text mining",
"computer vision"
] | https://openreview.net/pdf?id=PRuOK_LY_WPIq | https://openreview.net/forum?id=PRuOK_LY_WPIq | CkupCgw-sY1o7 | review | 1,363,319,940,000 | PRuOK_LY_WPIq | [
"everyone"
] | [
"Joonseok Lee, Seungyeon Kim, Guy Lebanon, Yoram Singer"
] | ICLR.cc/2013/conference | 2013 | review: We appreciate for both of your reviews and questions.
- Kernel width: we validated the kernel width experimentally. Specifically, we examined the following kernel types: Gaussian, triangular, and Epanchnikov kernels. We also experimented with the kernel width (0.6, 0.7, 0.8). We found that sufficiently large width (>= 0.8) performs well, probably due to the fact that the similarity between users or items is typically small.
- Distance measure: as the second reviewer mentioned, we first factorize M using standard incomplete SVD. We then compute the distance d based on the arc-cosine similarity between the rows of factor matrices U and V. We found that this approach performs better than defining the distance based on the rows and columns of the original rating matrix. This choice deems better probably due to the fact that many pairs of users (or item pairs) have almost no shared items because of the high sparsity of the rating matrix.
- Variance observed due to the sampling of anchor points: We omitted the discussion of this issue due to space constraints (3 pages).
- Effect of Nadaraya-Watson smoothing: we observed that the prediction quality is almost the same for anchor points and non-anchor points. In other words, the approximation due to the Nadaraya-Watson procedure does not seem to be the limiting factor.
- Hoelder continuity: this assumption is actually essential in our model since we smooth locally with respect to the distance d. We performed large deviation analysis which is also omitted due to the lack of space.
- We omitted references due to the strict page limit. We will add references to the most relevant work. Moreover, the long version of the submission includes substantial discussion and references of related work.
Please feel free to reply us if you have further questions.
Thank you. |
PRuOK_LY_WPIq | Matrix Approximation under Local Low-Rank Assumption | [
"Joonseok Lee",
"Seungyeon Kim",
"Guy Lebanon",
"Yoram Singer"
] | Matrix approximation is a common tool in machine learning for building accurate prediction models for recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model where we assume instead that the matrix is only locally of low-rank, leading to a representation of the observed matrix as a weighted sum of low-rank matrices. We analyze the accuracy of the proposed local low-rank modeling. Our experiments show improvements in prediction accuracy in recommendation tasks. | [
"local",
"assumption matrix approximation",
"observed matrix",
"matrix approximation",
"common tool",
"machine learning",
"accurate prediction models",
"recommendation systems",
"text mining",
"computer vision"
] | https://openreview.net/pdf?id=PRuOK_LY_WPIq | https://openreview.net/forum?id=PRuOK_LY_WPIq | 4eqD-9JEKn4Ea | review | 1,362,123,600,000 | PRuOK_LY_WPIq | [
"everyone"
] | [
"anonymous reviewer 4b7c"
] | ICLR.cc/2013/conference | 2013 | title: review of Matrix Approximation under Local Low-Rank Assumption
review: Matrix Approximation under Local Low-Rank Assumption
Paper summary
This paper deals with low-rank matrix approximation/completion. To reconstruct a matrix element M_{i,j}, the proposed method performs a weighted low rank matrix approximation which considers a similarity metric between matrix coordinates. More precisely, the weighting scheme emphasizes reconstruction errors close to the element {i,j} to reconstruct. As a computational speedup, the authors perform the low rank approximation only on a small set of coordinates and approximate the reconstruction for any coordinate through kernel estimation.
Review Summary
The core idea of the paper is interesting and could be helpful in many practical applications of low rank decomposition. The paper reads well and is technically correct. On the negative side, I feel that the availability of a meaningful similarity metric between coordinates should be discussed. The experimental section could be greatly improved. There is no reference to related work at all.
Review
In many application of matrix completion, the low rank decomposition algorithm is here to circumvent the fact that no meaningful similarity metric between coordinate pairs is available. For instance, if such a metric is available in a collaborative filtering scenario, one would simply take the input (customer, item) fetch its neighbors and average its ratings. Your algorithm presuppose the availability of such a metric, could you discuss this core aspect of your proposal in the paper?
Following on this idea, would you consider as baseline performing the Nadaraya-Watson kernel regression on the matrix itself and reports the result in your experimental section. This would be meaningful to quantify how much comes from the low rank smoothing and how much comes simply from the quality of the similarity metric.
Still in the experimental section,
- would you consider validating the kernel width?
- discuss the influence of the L2 regularizer which is not even introduced in the previous sections
- define clearly the d you use. To me d(s,s') compare two coordinate pairs and I do not know how to relate it to the arccos you are using, i.e. what are x,y?
- could you measure the variance observed due to the sampling of anchor points and could you report whether the reconstruction error is greater further from anchor points?
- how does Nadaraya-Watson smoothing compare with respect to solving the low rank problem for each point?
References:
- you should at least refer to weighted low rank matrix approximation (Srebro & Jaakkola, ICML-03). It would be good to refer to prior work on expanding low rank matric approximation, given how fertile this field has been in the Netflix prize days (Maximum Margin Matrix Factorizations, RBM for matrix completion...).
Details along the text
- In Eq. 1, to unify notation, you could use the projection Pi here as well
- Hoelder continuity: I do not understand how it relate to the smoothing kernel approach defined below. I believe this sentence could be removed. |
PRuOK_LY_WPIq | Matrix Approximation under Local Low-Rank Assumption | [
"Joonseok Lee",
"Seungyeon Kim",
"Guy Lebanon",
"Yoram Singer"
] | Matrix approximation is a common tool in machine learning for building accurate prediction models for recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model where we assume instead that the matrix is only locally of low-rank, leading to a representation of the observed matrix as a weighted sum of low-rank matrices. We analyze the accuracy of the proposed local low-rank modeling. Our experiments show improvements in prediction accuracy in recommendation tasks. | [
"local",
"assumption matrix approximation",
"observed matrix",
"matrix approximation",
"common tool",
"machine learning",
"accurate prediction models",
"recommendation systems",
"text mining",
"computer vision"
] | https://openreview.net/pdf?id=PRuOK_LY_WPIq | https://openreview.net/forum?id=PRuOK_LY_WPIq | 9QsSQSzMpW9Ac | review | 1,362,191,520,000 | PRuOK_LY_WPIq | [
"everyone"
] | [
"anonymous reviewer 76ef"
] | ICLR.cc/2013/conference | 2013 | title: review of Matrix Approximation under Local Low-Rank Assumption
review: Approximation and completion of sparse matrices is a common task. As popularized by the Netflix prize, there are many possible approaches, and combinations of different styles of approach can lead to better predictions than individual methods. In this work, local prediction and low-rank factorization are combined as one coherent method.
This is a short paper, with an interesting idea, and some compelling results. It has the appealing property that one can almost guess what is going to come from the abstract. My key question while reading was how locality was going to be defined: one of the goals of low-rank learning is finding a space in which to represent entities. The paper uses a simple distance measure to local support points. I'm not sure if a value is missing from one row and not another whether it is ignored, or counted as zero. I wonder if an approach that finds a low-rank or factor model fit and uses that to define distances for local modelling might work better. Potentially one could iterate, after fitting the model, get improved distances and refit.
I find the large improvement over the Netflix prize winners surprising given the large effort invested over three years to get that result. Is one relatively simple method really sufficient to blow that away? I think open code and scrutiny would be required to be sure. (Honest mistakes are not unprecedented: http://arxiv.org/abs/1301.6659v2 ) It will be a great result if correct.
I found the complete lack of references distracting. There is clearly related work in this area. Some of it is even mentioned, just with no formal citations. This is a workshop submission for light touch review, but citations seem like a basic requirement for any scientific document.
Pros: neat idea, quick, to-the-point presentation.
Cons: I'm suspicious of the results, and would like to see a reference section. |
BmOABAaTQDmt2 | A Semantic Matching Energy Function for Learning with Multi-relational
Data | [
"Xavier Glorot",
"Antoine Bordes",
"Jason Weston",
"Yoshua Bengio"
] | Large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology or information retrieval, to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced. The network is trained to encode the semantics of these graphs in order to assign high probabilities to plausible components. We empirically show that it reaches competitive performance in link prediction on standard datasets from the literature. | [
"data",
"graphs",
"relational learning",
"crucial",
"huge amounts",
"many application domains",
"computational biology",
"information retrieval",
"natural language processing"
] | https://openreview.net/pdf?id=BmOABAaTQDmt2 | https://openreview.net/forum?id=BmOABAaTQDmt2 | gL2tL3lwAfLw1 | review | 1,363,968,300,000 | BmOABAaTQDmt2 | [
"everyone"
] | [
"Xavier Glorot, Antoine Bordes, Jason Weston, Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their comments.
It is true that our model should be compared with (Jenatton et al., NIPS12). This model has been developed simultaneously as ours, that's why it has not been included in the first version. We added this reference and their results (LFM model) in a revised version of our abstract (http://arxiv.org/abs/1301.3485v2).
Unfortunately, SME is slightly outperformed by LFM on Kinships and Nations and equivalent on UMLS. Still, we believe that this work would make an interesting presentation at ICLR. First, together with LFM, SME is the only current method that can scale to large number of relation types (and both have been developed at the same time). The LFM paper actually displays an experiment on data with 5k relation types on which LFM and SME perform similarly. Second, contrary to all previous methods, SME models relation types as vectors, lying in the space as entities. From a conceptual viewpoint, this is powerful, since it models any relation types as a standard entity (and vice-versa). Hence, SME is the only method that could be applied on data for which any entity can also create relationships between other entities.
We now also compare our model with CANDECOMP-PARAFAC (CP), a standard tensor factorization method. |
BmOABAaTQDmt2 | A Semantic Matching Energy Function for Learning with Multi-relational
Data | [
"Xavier Glorot",
"Antoine Bordes",
"Jason Weston",
"Yoshua Bengio"
] | Large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology or information retrieval, to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced. The network is trained to encode the semantics of these graphs in order to assign high probabilities to plausible components. We empirically show that it reaches competitive performance in link prediction on standard datasets from the literature. | [
"data",
"graphs",
"relational learning",
"crucial",
"huge amounts",
"many application domains",
"computational biology",
"information retrieval",
"natural language processing"
] | https://openreview.net/pdf?id=BmOABAaTQDmt2 | https://openreview.net/forum?id=BmOABAaTQDmt2 | ibXkikDckabeu | review | 1,362,123,900,000 | BmOABAaTQDmt2 | [
"everyone"
] | [
"anonymous reviewer 428a"
] | ICLR.cc/2013/conference | 2013 | title: review of A Semantic Matching Energy Function for Learning with Multi-relational
Data
review: Semantic Matching Energy Function for Learning with Multi-Relational Data
Paper Summary
This paper deals with learning an energy model over 3-way relationships. Each entity in the relation is associated a low dimensional representation and a neural network associate a real value to each representation triplet. The learning algorithm relies on an online ranking loss. Two models are proposed a linear model and a bilinear model.
Review Summary
The paper is clear and reads well. Its use of the ranking loss function to this problem is an interesting proposition. It could give more details on the ranking loss and the training procedure. The experiments could also be more thorough. My main concern however is that references to a co-author own work have been omitted. This omission means that the authors pretend not to know that a model with better reported performance exists. This should be discouraged and I will recommend the rejection of the paper.
Review
This paper is part of the recent effort of using distributed representation and various loss function for learning relational models. Papers focusing on this line of research include work from A. Bordes, J. Weston and Y. Bengio:
- A Latent Factor Model for Highly Multi-relational Data (NIPS 2012).
Rodolphe Jenatton, Nicolas Le Roux, Antoine Bordes and Guillaume Obozinski.
- Learning Structured Embeddings of Knowledge Bases (AAAI 2011).
Antoine Bordes, Jason Weston, Ronan Collobert and Yoshua Bengio.
The variations among this paper mainly involves
- model regularization (low rank, parameter tying...)
- loss function
Regarding regularization, your proposition, Jenatton et al, and RESCAL are highly related. Basically your bilinear model seems to introduce a rank constrain on the 3D tensor representing all the relation {R_k, forall k} as in RESCAL notation. Basically your bilinar model decomposes R_k = (E_{rel,k} W_l) (W_r E_{rel,k})^T while the NIPS2012 model decomposes R_k as a linear combination of rank one matrices shared accross relations. Like Jenatton et al, you brake the symetry of the left and right relations.
Regarding loss, RESCAL uses MSE, Jenatton et al uses logistic loss and you use a ranking loss.
This differences result in different AUCs. Jenatton et al is always better, RESCAL and your model are close. Given that Jenatton et al and RESCAL precede your submission. I feel it is necessary to check one thing at a time, i.e. training a model parameterized like RESCAL / Jenatton et al / yours with all three losses. This would give the best combination. This could give an empirical advantage to your ideas (either parameterization or ranking loss) over Jenatton et al.
Given that your model is worse in terms of AUC compared to Jenatton et al. I feel that you should at least explain why and maybe highlight some other advantages of your approach. I am disappointed that you do not refer to Jenatton et al: you know about this paper (shared co-author), the results on the same data are better and you do not even mention it.
Typos/Details
Intro: unlike in previous work -> put citation here.
2.2 (2) even if it remains low dimensional, nothing forces the dimension of ... -> barely understandable rephrase this sentence. |
BmOABAaTQDmt2 | A Semantic Matching Energy Function for Learning with Multi-relational
Data | [
"Xavier Glorot",
"Antoine Bordes",
"Jason Weston",
"Yoshua Bengio"
] | Large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology or information retrieval, to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced. The network is trained to encode the semantics of these graphs in order to assign high probabilities to plausible components. We empirically show that it reaches competitive performance in link prediction on standard datasets from the literature. | [
"data",
"graphs",
"relational learning",
"crucial",
"huge amounts",
"many application domains",
"computational biology",
"information retrieval",
"natural language processing"
] | https://openreview.net/pdf?id=BmOABAaTQDmt2 | https://openreview.net/forum?id=BmOABAaTQDmt2 | fjenfiFhEZfLM | review | 1,362,379,680,000 | BmOABAaTQDmt2 | [
"everyone"
] | [
"anonymous reviewer cae2"
] | ICLR.cc/2013/conference | 2013 | title: review of A Semantic Matching Energy Function for Learning with Multi-relational
Data
review: The paper proposes two functions for assigning energies to triples of
entities, represented as vectors. One energy function essentially
adds the vectors of the relations and the entities, while another
energy function computes a tensor product of the relation and both
entities. The new energy functions appear to beat other methods.
The main weakness is in the relative lack of novelty. The paper
proposes a slightly different neural network architecture for
computing energies of object triplets from the ones that existed
before, but its advantage over these architectures hasn't been
demonstrated conclusively. How does it compare to a simple tensor
factorization? (or even a factorization that computes an energy with a
3-way inner product sum_i a_i R_i b_i? such a factorization embeds
entities and relations in the same space) Without this comparison, the
new energy function is merely a 'new neural network architecture' that
is not shown to outperform other architectures. And indeed, the the
performance of a simple Tensor factorization method matches the
results of the more sophisticated factorization method that is
proposed here, on the datasets from [6] that overlap with the datasets
here (namely, UML and kinship).
In general, new energy functions or architectures are worthwhile only
when they reliably improve performance (like the recently introduced
maxout networks) or when they have other desirable properties, such as
interpretability or simplicity. The energy function proposed here is
more complex than a simple tensor factorization method which appears
to work just as well.
Pros
- New energy function, method appears to work well
Cons
- The architecture is not compared against simpler architectures,
and there is evidence that the simpler architectures achieve
identical performance. |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. This paper proposes a model for learning task dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model together with an efficient method to train them. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on the latter. | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | DtAvRX423kRIf | review | 1,361,903,280,000 | rOvg47Txgprkn | [
"everyone"
] | [
"Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: This is an interesting investigation and I only have remarks to make regarding the CIFAR-10 and CIFAR-100 results and the rapidly moving state-of-the-art (SOTA). In particular, on CIFAR-100, the 56.29% accuracy is not state-of-the-art anymore (thankfully, our field is moving fast!). There was first the result by Zeiler & Fergus using stochastic pooling, bringing the SOTA to 57.49% accuracy. Then, using another form of pooling innovation (max-linear pooling units with dropout, which we call maxout units), we brought the SOTA on CIFAR-100 to 61.43% accuracy. On CIFAR-10, maxout networks also beat the SOTA, bringing it to 87.07% accuracy. All these are of course without using any deformations.
You can find these in this arxiv paper (which appeared after your submission): http://arxiv.org/abs/1302.4389
Maxout units also use linear filters pooled with a max, but without the positivity constraint. We found that using dropout on the max output makes a huge difference in performance, so you may want to try that as well. |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. This paper proposes a model for learning task dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model together with an efficient method to train them. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on the latter. | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | 4w1kwHXszr4D8 | review | 1,362,138,060,000 | rOvg47Txgprkn | [
"everyone"
] | [
"anonymous reviewer 2426"
] | ICLR.cc/2013/conference | 2013 | title: review of Learnable Pooling Regions for Image Classification
review: This paper proposes a method to jointly train a pooling layer and a classifier in a supervised way.
The idea is to first extract some features and then train a 2 layer neural net by backpropagation (although in practice they use l-bfgs). The first layer is linear and the parameters are box constrained and regularized to be spatially smooth. The authors propose also several little tricks to speed up training (divide the space into smaller pools, partition the features, etc.).
Most relevant work related to this method is cited but some references are missing.
For instance, learning pooling (and unappealing) regions was also proposed by Zeiler et al. in an unsupervised setting:
Differentiable Pooling for Hierarchical Feature Learning
Matthew D. Zeiler and Rob Fergus
arXiv:1207.0151v1 (July 3, 2012)
See below for other missing references.
The overall novelty is limited but sufficient. In my opinion the most novel piece in this work is the choice of the regularizer that enforces smoothness in the weights of the pooling. This regularization term is not new per se, but its application to learning filters certainly is.
The overall quality is fair. The paper lacks clarity in some parts and the empirical validation is ok but not great.
I wish the authors stressed more the importance of the weight regularization and analyzed that part a bit more in depth instead of focussing on other aspects of their method which seem less exciting actually.
PROS
+ nice idea to regularize weights promoting spatial smoothness
+ nice visualization of the learned parameters
CONS
- novelty is limited and the overall method relies on heuristics to improve its scalability
- empirical validation is ok but not state of the art as claimed
- some parts of the paper are not clear
- some references are missing
Detailed comments:
- The notation in sec. 2.2 could be improved. In particular, it seems to me that pooling is just a linear projection subject to constraints in the parameterization. The authors mentions that constraints are used just for interpretability but I think they are actually important to make the system 'less unidentifiable' (since it is the composition of two linear stages).
Regarding the box constraints, I really do not understand how the authors modified l-bfgs to account for these box constraints since this is an unconstrained optimization method. A detailed explanation is required for making this method reproducible. Besides, why not making the weights non-negative and sum to one instead?
- The pre-pooling step is unsatisfying because it seems to defeat the whole purpose of the method. Effectively, there seem to be too many other little tricks that need to be in place to make this method competitive.
- Other people have reported better accuracy on these datasets. For instance,
Practical Bayesian Optimization of Machine Learning Algorithms
Jasper Snoek, Hugo Larochelle and Ryan Prescott Adams
Neural Information Processing Systems, 2012
- There are lots of imprecise claims:
- convolutional nets before HMAX and SPM used pooling and they actually learned weights in the average pooling/subsampling step
- 'logistic function' in pag. 3 should be 'softmax function'
- the contrast with the work by Le et al. on pag.4 is weak since although pooling regions can be trained in parallel but the classifier trained on the top of them has to be done afterwards. This sequential step makes the whole procedure less parallelizable.
- second paragraph of sec. 3.2 about 'transfer pooling regions' is not clear. |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. This paper proposes a model for learning task dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model together with an efficient method to train them. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on the latter. | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | uEhruhQZrGeZw | review | 1,361,927,280,000 | rOvg47Txgprkn | [
"everyone"
] | [
"anonymous reviewer 45d8"
] | ICLR.cc/2013/conference | 2013 | review: PS. After reading some of the other comments, I see that I was wrong about the weights in the linear layer being possibly negative. I actually wasn't able to find the part of the paper that specifies this. I think in general the paper could be improved by being a little bit more straightforward. The method is very simple but it's difficult to tell exactly what the method is from reading the paper.
I definitely agree with Yann LeCun that the smoothness prior is interesting and should be explored in more detail. |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. This paper proposes a model for learning task dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model together with an efficient method to train them. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on the latter. | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | 6tLOt5yk_I6cd | review | 1,363,741,140,000 | rOvg47Txgprkn | [
"everyone"
] | [
"anonymous reviewer 45d8"
] | ICLR.cc/2013/conference | 2013 | review: I'm not sure why the authors are claiming state of the art on CIFAR-10 in their response, because the paper doesn't make this claim and I don't see any update to the paper. The method does not actually have state of the art on CIFAR-10 even under the constraint that it follow the architecture considered in the paper. It's nearly as good as Jia and Huang's method but not quite as good.
Back-propagation over the max operator may be possible, but how would you parameterize the max to include or exclude different input features? Each max pooling unit needs to take the max over some subset of the detector layer features. Since including or excluding a feature in the max is a hard 0/1 decision it's not obvious how to learn those subsets using your gradient based method.
Regarding the competitiveness of CIFAR-100: This is not a very important point because CIFAR-100 being competitive or not doesn't enter much into my evaluation of the paper. It's still true that the proposed method beats Jia and Huang on that dataset. However, I do think that my opinion of CIFAR-100 as being less competitive than CIFAR-10 is justified. I'm aware that CIFAR-100 has fewer examples per class and that this explains why the error rates published on that dataset are higher. My reason for considering it less competitive is that the top two papers on CIFAR-100 right now both say that they didn't even bother optimizing their hyperparameters for that dataset. Presumably, anyone could easily get a better result on that dataset just by downloading the code for one of those papers and playing with the hyperparameters for a day or two. |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. This paper proposes a model for learning task dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model together with an efficient method to train them. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on the latter. | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | ttaRtzuy2NtjF | review | 1,360,139,640,000 | rOvg47Txgprkn | [
"everyone"
] | [
"Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | review: As far as I can tell, the algorithm in section 2.2 (pooling + linear classifier) is essentially a 2-layer neural net trained with backprop, except that the hidden layer is linear with positive weights.
The only innovation seems to be the weight spatial smoothness regularizer of section 2.3. I think this should be emphasized.
Question: why use LBFGS when a simple stochastic gradient would have been simpler and probably faster?
The introduction seems to suggest that pooling appeared with [Riesenhuber and Poggio 2009] and [Koenderink and van Doorn 1999], but models of vision with pooling (even multiple levesl of pooling) can be found in the neo-cognitron model [Fukushima 1980] and in convolutional networks [LeCun et al. 1990, and pretty much every subsequent paper on convolutional nets].
The origin of the idea can be traced to the 'complex cell' model from Hubel and Wiesel's classic work on the cat's primary visual cortex [Hubel and Wiesel 1962].
You might also be interested in [Boureau et al. ICML 2010] 'A theoretical analysis of feature pooling in vision algorithms'. |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. This paper proposes a model for learning task dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model together with an efficient method to train them. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on the latter. | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | ddaBUNcnvHrLK | review | 1,361,922,660,000 | rOvg47Txgprkn | [
"everyone"
] | [
"Ian Goodfellow"
] | ICLR.cc/2013/conference | 2013 | review: This is a follow-up to Yoshua Bengio's comment. I'm lead author on the paper that he linked to.
One reason that Zeiler & Fergus got good results on CIFAR-100 with stochastic max pooling and my co-authors and I got good results on CIFAR-100 with maxout is that we were both using deep architectures. I think there's room to ask the scientific question 'how well can we do with one layer, just by being more clever about how to do the pooling?' even if this doesn't immediately lead to better answers to the engineering question, 'how can we get the best possible numbers on CIFAR-100?' So it's important to evaluate Malinowski and Fritz's method in the context of it being constrained to using a single-layer architecture.
On the other hand, it's not obvious to me that Malinowski and Fritz's training procedure would generalize to deeper achitectures, since the current implementation assumes that the output of the pooling layer is connected directly to the classification layer. It would be interesting to investigate whether this strategy (and Jia and Huang's strategy) works for deeper architectures. |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. This paper proposes a model for learning task dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model together with an efficient method to train them. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on the latter. | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | 0IOVI1hnXH0m- | review | 1,362,196,620,000 | rOvg47Txgprkn | [
"everyone"
] | [
"anonymous reviewer c1a0"
] | ICLR.cc/2013/conference | 2013 | title: review of Learnable Pooling Regions for Image Classification
review: The paper presents a method for training pooling regions in image classification pipelines (similar to those that employ bag-of-words or spatial pyramid models). The system uses a linear pooling matrix to parametrize the pooling units and follows them with a linear classifier. The pooling units are then trained jointly with the classifier. Several strategies for regularizing the training of the pooling parameters are proposed in addition to several tricks to increase scalability. Results are presented on the CIFAR10 and CIFAR100 datasets.
The main idea here appears to be to replace the 'hard coded' average pooling stage + linear classifier with a trainable linear pooling stage + linear classifier. Though I see why this is natural, it is not clear to me why using two linear stages is advantageous here since the combined system is no more powerful than connecting the linear classifier directly to all the features. The two main advantages of competing approaches are that they can dramatically reduce dimensionality or identify features to combine with nonlinear pooling operations. It could be that the performance advantage of this approach (without regularization) comes from directly learning the linear classifier from all the feature values (and thus the classifier has lower bias).
The proposed regularization schemes applied to the pooling units potentially change the picture. Indeed the authors found that a 'smoothness' penalty (which enforces some spatial coherence on the pooling weights) was useful to regularize the system, which is quite similar to what is achieved using hand-coded pooling areas. The advantage is that the classifier is given the flexibility to choose other weights for all of the feature values while retaining regularization that is similar to hand-coded pooling. How useful this effect is in general seems worth exploring in more detail.
Pros:
(1) Potentially interesting analysis of regularization schemes to learn weighted pooling units.
(2) Tricks for pre-training the pooling units in batches and transferring the results to other datasets.
Cons:
(1) The method does not appear to add much power beyond the ability to specify prior knowledge about the smoothness of the weights along the spatial dimensions.
(2) The results show some improvement on CIFAR-100, but it is not clear that this could not be achieved simply due to the greater number of classifier parameters (as opposed to the pooling methods proposed in the paper.) |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. This paper proposes a model for learning task dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model together with an efficient method to train them. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on the latter. | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | L9s74sx8Ka9cP | comment | 1,363,751,520,000 | 6tLOt5yk_I6cd | [
"everyone"
] | [
"Mateusz Malinowski"
] | ICLR.cc/2013/conference | 2013 | reply: As Table 1 shows our method gives similar results to Jia's method (79.6% and 80.17% accuracy). If we allow transfer between datasets, our method gives slightly better results (Table 5 reports 80.35% test accuracy for our method).
We could weight features with real-valued weights constrained to unit cube, and next use max-operator. |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. This paper proposes a model for learning task dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model together with an efficient method to train them. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on the latter. | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | bYfTY-ABwrbB2 | review | 1,363,737,660,000 | rOvg47Txgprkn | [
"everyone"
] | [
"Mateusz Malinowski, Mario Fritz"
] | ICLR.cc/2013/conference | 2013 | review: We thank all the reviewers for their comments.
We will include suggested papers on related work and origins of pooling architectures as well as improvement on the state of the art that occurred in the meanwhile.
The reviewers acknowledge our analysis of regularization schemes to learn weighted pooling units together with a regularizer that promotes spatial smoothness.
Our work aims at replacing hand-crafted pooling stage in computer vision architectures ([1], [2], [3] and [4]) where the pooling is a way to reduce dimensionality of the features while preserving spatial information. Handcrafted spatial pooling schemes that operate on an image-level are still part of many state of the art architectures. In particular, recent approaches that aim at higher-level semantic representations (e.g. [3], [6]) follow this paradigm and are within the scope of our method. We therefore believe that our method will find wide applicability in those scenarios.
Anonymous 45d8:
We don't agree that CIFAR-100 is less-competitive as the state-of-the-art results are lower than CIFAR-10, moreover CIFAR-100 contains fewer examples per class for training and 10x more classes.
We are not restricted to sum pooling as back-propagation over the max operator is possible.
We use non-negativity constraint for the weights as Formula 5 shows.
Sparsity constraint on the weights has no computational benefits at test time as the weighted sum ranges over the whole image.
Concerning the remarks about increased computation time, we would like to point out that computational costs are dominated by the coding procedure. The pooling stage - hand-crafted or learnt - is on the order of milliseconds per image.
The connection between the matrix factorization of the weights of the softmax classifier and pooling stage is an interesting additional observation, however, the paper analyzes the regularization terms of the pooling operator and therefore regularization of the factorized weight matrix.
In our work we want to make our architecture consistent with other computer vision architectures that use image-level pooling stage ([1], [2], [3], [4] and [5]) exploiting the shared representation among classes and computational benefits of this method.
Anonymous 2426:
The method produces state-of-the-art results, at the time of submission, on CIFAR-100 and state-of-the-art on both CIFAR-10 and CIFAR-100 given SPM architecture ([1], [2], [4]).
As our results show, the smoothness constrain/regularization is the most crucial (Table 3), non-negativity constraint though increases the interpretability of the results. We use lbfgs with projection onto a unit box after every weights update.
Although some of our speed-ups to make the system more scalable are heuristic, they are appreciated e.g. by 'Anonymous c1a0' and share similarities with recently proposed approaches for scalable learning as we reference in the paper.
Anonymous c1a0:
Increasing number of classification parameters in the SPM architecture ([1], [2], [4]) requires a bigger codebooks which increases the complexity of encoding step as every image patch has to be assigned to a cluster via triangle coding [4]. This would lead to a significant increase at test time. On the other hand, our architecture adds little overhead compared to SPM architectures at the test time.
Anonymous 45d8 & Anonymous 2426:
The pre-pooling step is pooling over a small neighborhood (over a 3x3 neighboring pixels), and therefore can be seen as form of weight sharing. This is a technical detail in order to reduce memory consumption and training time. This doesn't defy the main argument given in the paper as pooling is learnt over larger areas. |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. This paper proposes a model for learning task dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model together with an efficient method to train them. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on the latter. | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | xEdmrekMJsvCj | review | 1,361,914,920,000 | rOvg47Txgprkn | [
"everyone"
] | [
"anonymous reviewer 45d8"
] | ICLR.cc/2013/conference | 2013 | title: review of Learnable Pooling Regions for Image Classification
review: Summary:
The paper proposes to replace the final stages of Coates and Ng's CIFAR-10 classification pipeline. In place of the hand-designed 3x3 mean pooling layer, the paper proposes to learn a pooling layer. In place of the SVM, the paper proposes to use softmax regression jointly trained with the pooling layer.
The most similar prior work is Jia and Huang's learned pooling system. Jia and Huang use a different means of learning the pooling layer, and train a separate logistic regression classifier for each class instead of using one softmax model.
The specific method proposed here for learning the pooling layer is to make the pooling layer a densely connected linear layer in an MLP and train it jointly with the softmax layer.
The proposed method doesn't work quite as well as Jia and Huang's on the CIFAR-10 dataset, but does beat them on the less-competitive CIFAR-100 benchmark.
Pros:
-The method is fairly simple and straightforward
-The method improves on the state of the art of CIFAR-100 (at the time of submission, there are now two better methods known to this reviewer)
Cons:
-I think it's somewhat misleading to call this operation pooling, for the following reasons:
1) It doesn't allow learning how to max-pool, as Jia and Huang's method does. It's sort of like mean pooling, but since the weights can be negative it's not even really a weighted average.
2) Since the weights aren't necessarily sparse, this loses most of the computational benefit of pooling, where each output is computed as a function of just a few inputs. The only real computational benefit is that you can set the hyperparameters to make the output smaller than the input, but that's true of convolutional layers too.
-A densely connected linear layer followed by a softmax layer is representationally equivalent to a softmax layer with a factorized weight matrix. Any improvements in performance from using this method are therefore due to regularizing a softmax model better. The paper doesn't explore this connection at all.
-The paper doesn't do proper controls. For example, their smoothness prior might explain their entire success. Just applying the smoothness prior to the softmax model directly might work just as well as factoring the softmax weights and applying the smoothness prior to one factor.
-While the paper says repeatedly that their method makes few assumptions about the geometry of the pools, their 'pre-pooling' step seems to make most of the same assumptions as Jia and Huang, and as far as I can tell includes Coates and Ng's method as a special case. |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. This paper proposes a model for learning task dependent pooling scheme -- including previously proposed hand-crafted pooling schemes as a particular instantiation. In our work, we investigate the role of different regularization terms used in the proposed model together with an efficient method to train them. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on the latter. | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | mdD47o8J4hmr1 | review | 1,360,973,580,000 | rOvg47Txgprkn | [
"everyone"
] | [
"Mateusz Malinowski"
] | ICLR.cc/2013/conference | 2013 | review: Our paper addresses the shortcomings of fixed and data-independent pooling regions in architectures such as Spatial Pyramid Matching [Lazebnik et. al., 'Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories', CVPR 2006], where dictionary-based features are pooled over large neighborhood. In our work we propose an alternative data-driven approach for the pooling stage, and there are three main novelties of our work.
First of all, we base our work on the popular Spatial Pyramid Matching architectures and generalize the pooling operator allowing for joint and discriminative training of both the classifier together with the pooling operator. The realization of the idea necessary for training is essentially an Artificial Neural Network with dense connections between pooling units with the classifier, and the pooling units connected with the high-dimensional dictionary-based features. Therefore, back-propagation and the Neural Network interpretation should rather be considered here as a tool to achieve joint and data-dependent training of the parameters of the pooling operator and the classifier. Moreover, our parameterization allows for the interpretation in terms of spatial regions. The proposed architecture is an alternative to another discriminatively trained architecture presented by Jia et. al. ['Beyond spatial pyramids: Receptive field learning for pooled image features' CVPR 2012 and NIPS workshop 2011] outperforming the latter on the CIFAR-100 dataset.
Secondly, as opposed to the previous Spatial Pyramid Matching schemes, we don't constrain the pooling regions to be the identical for all coordinates of the code.
Lastly, as you've said, we investigate regularization terms. The popular spatial pyramid matching architectures which we generalize in this paper are typically used to pool over large spatial regions. In combination with our code-specific pooling scheme this leads to a large number of parameters that call for regularization. In our investigations of different regularizers it turns out that a smoothness regularizer is key to strong performance for this type of architecture on CIFAR-10 and CIFAR-100 datasets.
Concerning LBFGS vs SGD: We have chosen LBFGS out of convenience, as it tends to have fewer parameters.
Thanks for pointing out missing references. |
5Qbn4E0Njz4Si | Hierarchical Data Representation Model - Multi-layer NMF | [
"Hyun-Ah Song",
"Soo-Young Lee"
] | Understanding and representing the underlying structure of feature hierarchies present in complex data in intuitively understandable manner is an important issue. In this paper, we propose a data representation model that demonstrates hierarchical feature learning using NMF with sparsity constraint. We stack simple unit algorithm into several layers to take step-by-step approach in learning. By utilizing NMF as unit algorithm, our proposed network provides intuitive understanding of the learning process. It is able to demonstrate hierarchical feature development process and also discover and represent feature hierarchies in the complex data in intuitively understandable manner. We apply hierarchical multi-layer NMF to image data and document data to demonstrate feature hierarchies present in the complex data. Furthermore, we analyze the reconstruction and classification abilities of our proposed network and prove that hierarchical feature learning approach excels performance of standard shallow network. By providing underlying feature hierarchies in complex real-world data sets, our proposed network is expected to help machines develop intelligence based on the learned relationship between concepts, and at the same time, perform better with the small number of features provided for data representation. | [
"complex data",
"network",
"feature hierarchies",
"understandable manner",
"hierarchical feature",
"nmf",
"nmf understanding",
"underlying structure"
] | https://openreview.net/pdf?id=5Qbn4E0Njz4Si | https://openreview.net/forum?id=5Qbn4E0Njz4Si | Oel6vaaN-neNQ | review | 1,362,279,120,000 | 5Qbn4E0Njz4Si | [
"everyone"
] | [
"anonymous reviewer 7984"
] | ICLR.cc/2013/conference | 2013 | title: review of Hierarchical Data Representation Model - Multi-layer NMF
review: The paper proposes to stack NMF models on top of each other. At each level, a non-linear function of normalized decomposition coefficients is used and decomposed using another NMF.
This is essentially an instance of a deep belief network, where the unsupervised learning part is done using NMF, which, to the best of my knowledge had not been done before.
The new method is then applied to document data where a hierarchy of topics seems to be discovered. Applications are also shown on reconstructing digits.
The extended abstract however does not give many details on all the specifics of the method.
Comments:
-It would have been nice (a) to relate the hierachy to existing topic models [A,B], and (b) to see more topics.
-On Figure 2, why are reconstruction errors decreasing with the number of features?
-On the digits, the differences between shallow and deep networks are not clear.
[A] D. Blei, T. Griffiths, and M. Jordan. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. Journal of the ACM, 57:2 1–30, 2010.
[B] R. Jenatton, J. Mairal, G. Obozinski, F. Bach. Proximal Methods for Hierarchical Sparse Coding. Journal of Machine Learning Research, 12, 2297-2334, 2011.
Pros:
-Interesting idea of stacking NMFs.
Cons:
-Experimental results are interesting but not great. What is exactly achieved is not clear. |
5Qbn4E0Njz4Si | Hierarchical Data Representation Model - Multi-layer NMF | [
"Hyun-Ah Song",
"Soo-Young Lee"
] | Understanding and representing the underlying structure of feature hierarchies present in complex data in intuitively understandable manner is an important issue. In this paper, we propose a data representation model that demonstrates hierarchical feature learning using NMF with sparsity constraint. We stack simple unit algorithm into several layers to take step-by-step approach in learning. By utilizing NMF as unit algorithm, our proposed network provides intuitive understanding of the learning process. It is able to demonstrate hierarchical feature development process and also discover and represent feature hierarchies in the complex data in intuitively understandable manner. We apply hierarchical multi-layer NMF to image data and document data to demonstrate feature hierarchies present in the complex data. Furthermore, we analyze the reconstruction and classification abilities of our proposed network and prove that hierarchical feature learning approach excels performance of standard shallow network. By providing underlying feature hierarchies in complex real-world data sets, our proposed network is expected to help machines develop intelligence based on the learned relationship between concepts, and at the same time, perform better with the small number of features provided for data representation. | [
"complex data",
"network",
"feature hierarchies",
"understandable manner",
"hierarchical feature",
"nmf",
"nmf understanding",
"underlying structure"
] | https://openreview.net/pdf?id=5Qbn4E0Njz4Si | https://openreview.net/forum?id=5Qbn4E0Njz4Si | ZIE1IP5KlJTK- | review | 1,362,127,980,000 | 5Qbn4E0Njz4Si | [
"everyone"
] | [
"anonymous reviewer d1c1"
] | ICLR.cc/2013/conference | 2013 | title: review of Hierarchical Data Representation Model - Multi-layer NMF
review: This paper proposes a multilayer architecture based upon stacking non-negative matrix factorization modules and fine-tuning the entire architecture with reconstruction error. Experiments on text classification and MNIST reconstruction demonstrate the approach.
During layer-wise initialization of the multilayer architecture NMF is performed to obtain a low-rank approximation to the input. The output of an NMF linear transform passes through a nonlinearity to form the input to the subsequent layer. These nonlinear outputs of a layer are K = f(H) where f(.) is a nonlinear function and H are linear responses of the input. During joint network training, a squared reconstruction error objective is used. Decoding the final hidden layer representation back into the input space is performed with explicit inversions of the nonlinear function f(.). Overall, the notation and description of the multi-layer architecture (section 3) is quite unclear. It would be difficult to implement the proposed architecture based only upon this description
Experiments on Reuters text classification and MNIST primarily focus on reconstruction error and visualizing similarities discovered by the model. The text similarities are interesting, but showing a single learned concept does not sufficiently demonstrate the model's ability to learn interesting structure. MNIST visualizations are again interesting, but the lack of MNIST classification results is strange given the popularity of the dataset. Finally, no experiments compare to other models e.g. simple sparse auto-encoders to serve as a baseline for the proposed algorithm.
Notes:
-The abstract should be included as part of the paper
- Matlab notation is paragraph 2 of section 2 is a bit strange. Standard linear algebra notation (e.g. I instead of eye) is more clear in this case
- 'smoothen' -> smooth or apply smoothing to
Summary:
- A stacking architecture based upon NMF is interesting
- The proposed architecture is not described well. Others would have difficulty replicating the model.
- Experiments do not compare to sufficient baselines or other layer-wise feature learners.
- Experiments and visualizations do not sufficiently demonstrate the claim that NMF-based feature hierarchies are easier to interpret |
5Qbn4E0Njz4Si | Hierarchical Data Representation Model - Multi-layer NMF | [
"Hyun-Ah Song",
"Soo-Young Lee"
] | Understanding and representing the underlying structure of feature hierarchies present in complex data in intuitively understandable manner is an important issue. In this paper, we propose a data representation model that demonstrates hierarchical feature learning using NMF with sparsity constraint. We stack simple unit algorithm into several layers to take step-by-step approach in learning. By utilizing NMF as unit algorithm, our proposed network provides intuitive understanding of the learning process. It is able to demonstrate hierarchical feature development process and also discover and represent feature hierarchies in the complex data in intuitively understandable manner. We apply hierarchical multi-layer NMF to image data and document data to demonstrate feature hierarchies present in the complex data. Furthermore, we analyze the reconstruction and classification abilities of our proposed network and prove that hierarchical feature learning approach excels performance of standard shallow network. By providing underlying feature hierarchies in complex real-world data sets, our proposed network is expected to help machines develop intelligence based on the learned relationship between concepts, and at the same time, perform better with the small number of features provided for data representation. | [
"complex data",
"network",
"feature hierarchies",
"understandable manner",
"hierarchical feature",
"nmf",
"nmf understanding",
"underlying structure"
] | https://openreview.net/pdf?id=5Qbn4E0Njz4Si | https://openreview.net/forum?id=5Qbn4E0Njz4Si | -B7o-Yy0XjB0_ | comment | 1,363,334,160,000 | Oel6vaaN-neNQ | [
"everyone"
] | [
"Hyun-Ah Song"
] | ICLR.cc/2013/conference | 2013 | reply: - Points on the con that experimental results are not great:
When we refer to Figure 2 in the paper, the proposed hierarchical feature extraction method results in much better classification and reconstruction performance, especially for small number of features.
It can be interpreted that, by taking hierarchical stages in reducing dimensions, our proposed method successfully finds more meaningful and helpful features in aspect of representing the data, compared to reducing dimensions at one step. |
5Qbn4E0Njz4Si | Hierarchical Data Representation Model - Multi-layer NMF | [
"Hyun-Ah Song",
"Soo-Young Lee"
] | Understanding and representing the underlying structure of feature hierarchies present in complex data in intuitively understandable manner is an important issue. In this paper, we propose a data representation model that demonstrates hierarchical feature learning using NMF with sparsity constraint. We stack simple unit algorithm into several layers to take step-by-step approach in learning. By utilizing NMF as unit algorithm, our proposed network provides intuitive understanding of the learning process. It is able to demonstrate hierarchical feature development process and also discover and represent feature hierarchies in the complex data in intuitively understandable manner. We apply hierarchical multi-layer NMF to image data and document data to demonstrate feature hierarchies present in the complex data. Furthermore, we analyze the reconstruction and classification abilities of our proposed network and prove that hierarchical feature learning approach excels performance of standard shallow network. By providing underlying feature hierarchies in complex real-world data sets, our proposed network is expected to help machines develop intelligence based on the learned relationship between concepts, and at the same time, perform better with the small number of features provided for data representation. | [
"complex data",
"network",
"feature hierarchies",
"understandable manner",
"hierarchical feature",
"nmf",
"nmf understanding",
"underlying structure"
] | https://openreview.net/pdf?id=5Qbn4E0Njz4Si | https://openreview.net/forum?id=5Qbn4E0Njz4Si | APRX62OnXa6nY | comment | 1,363,255,140,000 | ZIE1IP5KlJTK- | [
"everyone"
] | [
"Hyun-Ah Song"
] | ICLR.cc/2013/conference | 2013 | reply: - Description of proposed architecture:
Sorry about insufficient description of the proposed architecture! We had to fit all of the content into 3 pages.. We added more details on the architecture of our network, which includes actual computations involved in implementing the network in Appendix.
- Comparison with other baselines:
In this paper, we concentrated on proving our hypothesis that extending NMF into several layers will discover feature hierarchies present in the data, and provide better and meaningful features, compared to the standard shallow one (self-comparison). Since we wanted to observe the behavioral change when extended into several layers, we solely compared the result before and after stacking layers. We regarded comparing the performance with other baselines or layer-wise feature learners as not so important because other baselines do not provide us intuitive demonstration of the feature hierarchies. However, like your comment, we think that it may be meaningful to compare with the other feature learners and see if our proposed network can function as a simple feature extraction algorithm (without considering discovery of feature hierarchies).
- Experiments and visualizations do not sufficiently demonstrate the claim that NMF-based feature hierarchies are easier to interpret:
Through our proposed research, we wanted to prove that hierarchical learning with NMFs can present intuitive feature hierarchies by learning feature relationships across the layers.
We think our proposed network provides meaningful feature hierarchies compared to other networks (not necessarily easier interpretation) because:
a) compared to other feature learning networks that does not restrict the sign of the data, our proposed network intuitively represents learned features with the non-negativity property.
b) while shallow feature learning networks that is able to demonstrate features intuitively (ex. by restricting non-negativity constraint, NMF or other topic models), it learns relationships of features which develop into hierarchies.
c) although some recent topic models provides topic hierarchies in intuitive manner, the application is restricted to only document data. However, our proposed network can be applied to any types of data with non-negative signs, not just documents (it can be used to learn underlying feature hierarchies present in image, as well.)
With the experimental results in this paper, we are aware of the fact that interpretation of sub-class topics may not seem clear; we showed how words in the first layer features differ slightly from each other in terms of content, but develop into the same broad topic class. However, we believe that this has shown a good signal of potential for development. In order to reinforce our claim and strongly support the function of the proposed network, we would like to look for a text document set that provides ground-truth label of sub-categories as well.
- We included abstracts in the revised version, and corrected the notes you made above! Thanks!
- Remainder: The revised version will be available at Fri, 15 Mar 2013 00:00:00 GMT. |
5Qbn4E0Njz4Si | Hierarchical Data Representation Model - Multi-layer NMF | [
"Hyun-Ah Song",
"Soo-Young Lee"
] | Understanding and representing the underlying structure of feature hierarchies present in complex data in intuitively understandable manner is an important issue. In this paper, we propose a data representation model that demonstrates hierarchical feature learning using NMF with sparsity constraint. We stack simple unit algorithm into several layers to take step-by-step approach in learning. By utilizing NMF as unit algorithm, our proposed network provides intuitive understanding of the learning process. It is able to demonstrate hierarchical feature development process and also discover and represent feature hierarchies in the complex data in intuitively understandable manner. We apply hierarchical multi-layer NMF to image data and document data to demonstrate feature hierarchies present in the complex data. Furthermore, we analyze the reconstruction and classification abilities of our proposed network and prove that hierarchical feature learning approach excels performance of standard shallow network. By providing underlying feature hierarchies in complex real-world data sets, our proposed network is expected to help machines develop intelligence based on the learned relationship between concepts, and at the same time, perform better with the small number of features provided for data representation. | [
"complex data",
"network",
"feature hierarchies",
"understandable manner",
"hierarchical feature",
"nmf",
"nmf understanding",
"underlying structure"
] | https://openreview.net/pdf?id=5Qbn4E0Njz4Si | https://openreview.net/forum?id=5Qbn4E0Njz4Si | CC-TCptvxlrvi | comment | 1,363,255,380,000 | Oel6vaaN-neNQ | [
"everyone"
] | [
"Hyun-Ah Song"
] | ICLR.cc/2013/conference | 2013 | reply: - Details on the specifics of the method:
Sorry for the insufficient explanations on the method. We had to fit into 3 page limit.. We added detailed explanation of the method and computation in Appendix.
- Hierarchies by topic models [A,B]:
Thanks for the recommendation! In this paper, we focused on the general property of the hierarchical learning of the proposed network, regardless of types of data set (whether it is document set or image set, etc). This is the reason why we did not compare our result with any of other topic model result. However, we think it is meaningful to carefully observe how it works in different types of data sets in more detail. As furtherwork, we would like to look into more details on the function of our proposed network in terms of document dataset application by comparing the result with [A,B].
- On Figure 2, the x-axis represents the number of features, and also, the dimensions provided for data representation in H. If we increase the number of features provided for learning, the network learns features separately so that it can come up with more exact reconstruction of the original data. (For example, if we restrict the number of feature to one, the network has to cram the essential parts necessary for data representation into one, and it is hard to represent exactly what we want using just one feature or building block. However, if we provide sufficient number of features, network learns several essential parts separately, which means more number of more accurate building blocks and it will be easier to represent data by making use of the necessary features or building blocks, which may lead to more accurate reconstruction of the data) This is the reason why reconstruction error decreases with respect to increasing number of features.
- MNIST dataset difference between shallow and deep network:
Sorry for the small image! Instead of showing whole 0-9 MNIST digit reconstruction, we enlarged and focused on the image to a few example of digits that show clear difference between the shallow and deep network.
- What is achieved by the research?
There are mainly two contributions of our work: By taking step-by-step approach in learning of features using NMFs, 1. we discovered the relationships between low level features and high level features, and intuitively demonstrated class hierarchies present in the data, and additionally, 2. we learned more meaningful features which lead to better distributed data representation, which results in better classification and reconstruction performance (provided insufficient dimensions for data representation).
By extending NMF into several layers, we proposed a way to discover intuitive concept hierarchies by learning relationships between features, regardless of types of data set (while topic models are focused on revealing concept hierarchies of document set only, our proposed network can handle any types non-negative data sets.). With the experiments of comparison with the shallow network, we also proved that taking step-by-step approach in learning benefits feature learning as well. (this will be supporting evidence of further application of the proposed network)
- Remainder: The revised version will be available at Fri, 15 Mar 2013 00:00:00 GMT. |
4UGuUZWZmi4Ze | Feature grouping from spatially constrained multiplicative interaction | [
"Felix Bauer",
"Roland Memisevic"
] | We present a feature learning model that learns to encode relationships between images. The model is defined as a Gated Boltzmann Machine, which is constrained such that hidden units that are nearby in space can gate each other's connections. We show how frequency/orientation 'columns' as well as topographic filter maps follow naturally from training the model on image pairs. The model also helps explain why square-pooling models yield feature groups with similar grouping properties. Experimental results on synthetic image transformations show that spatially constrained gating is an effective way to reduce the number of parameters and thereby to regularize a transformation-learning model. | [
"model",
"feature",
"multiplicative interaction feature",
"multiplicative interaction",
"relationships",
"images",
"gated boltzmann machine",
"units",
"space",
"connections"
] | https://openreview.net/pdf?id=4UGuUZWZmi4Ze | https://openreview.net/forum?id=4UGuUZWZmi4Ze | D3uj2h4TUE2ce | review | 1,363,179,420,000 | 4UGuUZWZmi4Ze | [
"everyone"
] | [
"Felix Bauer"
] | ICLR.cc/2013/conference | 2013 | review: Points raised by reviewers:
reviewer 43a2:
(1) Good classification of rotations and scale are reported in Table 1, unfortunately these appear to be on toy, not natural, images. Impressive grouping of complex transformations such as translations and rotations are shown in Figure 3.
(2) While the gabors learned on natural image patches are interesting it is hard to judge them without knowing how large they are. These details seem to be omitted from the paper.
(3) It is not immediately obvious what applications would benefit from this type of model. It also seems like it could be relatively expensive computationally, and there was no mention of timing versus the standard gated Boltzmann machine model.
reviewer ea89:
(4) Please write the formulas for the full model that you train, not just the encoding. Even though they exist in other papers, they are not so complicated to write them down here.
(5) You say that the pinwheel patterns don't appear in rodents because they don't have a binocular vision. However you haven't actually obtained the pinwheels from binocularity but from video.
(6) The formula (9) is unclear and should be fixed. For example, how come the f index appears only once?
(6a) Figure 3: What is index of parameter set? In text you talk about different datasizes - where are the results for these?
reviewer cce5:
(7a) * Targets somewhat of a 'niche audience'; may be less accessible to the general representation learning community
(7b) * Presents a lot of qualitative but not quantitative results
(8) Fig 2: It's difficult to read/understand the frequency scale (left, bottom); it seems that frequency has been discretized; what do these bins represent, and how are they constructed?
(9) In section 2.1, could you be more explicit about what you mean by 'matching' the input filters (and the output filters). I assume the matching is referring to the connectivity by connecting filters to mapping units? Matching comes up again in Section 3, so it would help to clarify this early on.
(10) Check equation 8: what happened to C - should it not show up there?
Our response:
We thank the reviewers for their comments and suggestions. We submitted an updated version of the paper in which we address these points:
(1, 7b) We agree. While the toy results do suggest that the reduction in the number of parameters caused by grouping helps generalize, real world applications like activity recognition work much better with local receptive fields and pooling, which we feel is much too complicated for a first paper in this direction.
(2) We updated the paper to include a more detailed description of the datasets and experiments.
X (3) We included a discussion of computational complexity. The complexity scales with the square of the group size (which is typically small, like 5). This is not a huge increase in complexity because without grouping the model has to account for the equivalent number of products by replicating filters.
(4) We included the equations as suggested.
(5) This is a good point, and we now clarify this in the updated version of the paper. Binocular stimuli, like video, are dominated by local translation, and the exact same biological mechanisms have traditionally been assumed to model both (multiview complex cells). A simple GBM has in fact been applied to binocular 3D inference tasks in the past (eg. 'Stereopsis via Deep Learning', Memisevic, Conrad, 2011).
(6) We fixed this in the updated version.
(6a): We now describe the figure in more detail in the updated version. (Along the x-axis we show models with varying numbers of factors and mapping units)
(6a): We now describe the figure in more detail in the updated version. (Along the x-axis we show models with varying numbers of factors and mapping units)
(7a): While the model and experiments we discuss in the paper are very specific and technical, we see the main contribution of the paper in explaining concisely why square-pooling, group sparse coding and topographic feature models learn to group frequency, orientation and position and not phase. While it is well-known that they show this behaviour we show that thinking of squares as representing transformations can explain why. We rewrote the text to make this point clearer.
(8) They are DFT bins: We generate the plots by performing a 2d FFT on the learned Fourier filters (note that, since we train on translations in this experiment, we get Fourier components rather than Gabor features). We clarified this in the updated version.
(9) We use 'matching' synonymous with 'multiplying'. Indeed, this means that 'matched' filters are those whose product gets fed into a mapping unit (along with other products). We clarified this in the updated version.
(10) Yes. We fixed this in the updated version. |
4UGuUZWZmi4Ze | Feature grouping from spatially constrained multiplicative interaction | [
"Felix Bauer",
"Roland Memisevic"
] | We present a feature learning model that learns to encode relationships between images. The model is defined as a Gated Boltzmann Machine, which is constrained such that hidden units that are nearby in space can gate each other's connections. We show how frequency/orientation 'columns' as well as topographic filter maps follow naturally from training the model on image pairs. The model also helps explain why square-pooling models yield feature groups with similar grouping properties. Experimental results on synthetic image transformations show that spatially constrained gating is an effective way to reduce the number of parameters and thereby to regularize a transformation-learning model. | [
"model",
"feature",
"multiplicative interaction feature",
"multiplicative interaction",
"relationships",
"images",
"gated boltzmann machine",
"units",
"space",
"connections"
] | https://openreview.net/pdf?id=4UGuUZWZmi4Ze | https://openreview.net/forum?id=4UGuUZWZmi4Ze | VlvAlDIDt_Sa0 | review | 1,362,171,300,000 | 4UGuUZWZmi4Ze | [
"everyone"
] | [
"anonymous reviewer ea89"
] | ICLR.cc/2013/conference | 2013 | title: review of Feature grouping from spatially constrained multiplicative interaction
review: The model presented in this paper is an extension of a previous model that extracts features from images, and these featuers are multiplied together to extract motion information (or other relation between two images). The novelty is to connect each feature of one image to several features of other image. This reuses the features. Futher, these connections are made in groups, and the features in the group will learn to have related properties. With overlaping groups one obtains pinwheel patterns observed in visual cortex. This is a different mechanism then previous ones.
- Please write the formulas for the full model that you train, not just the encoding. Even though they exist in other papers, they are not so complicated to write them down here.
- You say that the pinwheel patterns don't appear in rodents because they don't have a binocular vision. However you haven't actually obtained the pinwheels from binocularity but from video.
- The formula (9) is unclear and should be fixed. For example, how come the f index appears only once?
- Figure 3: What is index of parameter set? In text you talk about different datasizes - where are the results for these? |
4UGuUZWZmi4Ze | Feature grouping from spatially constrained multiplicative interaction | [
"Felix Bauer",
"Roland Memisevic"
] | We present a feature learning model that learns to encode relationships between images. The model is defined as a Gated Boltzmann Machine, which is constrained such that hidden units that are nearby in space can gate each other's connections. We show how frequency/orientation 'columns' as well as topographic filter maps follow naturally from training the model on image pairs. The model also helps explain why square-pooling models yield feature groups with similar grouping properties. Experimental results on synthetic image transformations show that spatially constrained gating is an effective way to reduce the number of parameters and thereby to regularize a transformation-learning model. | [
"model",
"feature",
"multiplicative interaction feature",
"multiplicative interaction",
"relationships",
"images",
"gated boltzmann machine",
"units",
"space",
"connections"
] | https://openreview.net/pdf?id=4UGuUZWZmi4Ze | https://openreview.net/forum?id=4UGuUZWZmi4Ze | yTWI4b3EnB4CU | review | 1,361,968,140,000 | 4UGuUZWZmi4Ze | [
"everyone"
] | [
"anonymous reviewer 43a2"
] | ICLR.cc/2013/conference | 2013 | title: review of Feature grouping from spatially constrained multiplicative interaction
review: This paper introduces a group-gated Boltzmann machine for learning the transformations between a pair of images more efficiently than with a standard gated Boltzmann machine. Experiments show the model learns phase invariant complex cells-like units grouped by frequency and orientation. These groups can also be manipulated to include overlapping neighbors in which case the model learns topographic pinwheel layouts of orientation, frequency and phase. The paper also mentions how the model is related to squared-pooling used in other learning methods.
Pros
Interesting idea to add an additional connectivity matrix to the factors to enforce grouping behavior in a gated RBM. This is shown to be beneficial for learning translation invariant groups which are stable for frequency and orientation.
Good classification of rotations and scale are reported in Table 1, unfortunately these appear to be on toy, not natural, images. Impressive grouping of complex transformations such as translations and rotations are shown in Figure 3.
Figure 2 is a great figure. Clearly shows how a GRBM can represent all forms of frequency and orientation and combine these to represent translations. In general the paper was well written and has good explanatory figures.
Cons
While the gabors learned on natural image patches are interesting it is hard to judge them without knowing how large they are. These details seem to be omitted from the paper.
It is not immediately obvious what applications would benefit from this type of model. It also seems like it could be relatively expensive computationally, and there was no mention of timing versus the standard gated Boltzmann machine model.
Novelty and Quality:
This extension to gated Boltzmann machines is novel in that it allows grouping of features and increases the modelling power because the model no longer needs multiple feature to do simple translations. The paper was well written overall. |
4UGuUZWZmi4Ze | Feature grouping from spatially constrained multiplicative interaction | [
"Felix Bauer",
"Roland Memisevic"
] | We present a feature learning model that learns to encode relationships between images. The model is defined as a Gated Boltzmann Machine, which is constrained such that hidden units that are nearby in space can gate each other's connections. We show how frequency/orientation 'columns' as well as topographic filter maps follow naturally from training the model on image pairs. The model also helps explain why square-pooling models yield feature groups with similar grouping properties. Experimental results on synthetic image transformations show that spatially constrained gating is an effective way to reduce the number of parameters and thereby to regularize a transformation-learning model. | [
"model",
"feature",
"multiplicative interaction feature",
"multiplicative interaction",
"relationships",
"images",
"gated boltzmann machine",
"units",
"space",
"connections"
] | https://openreview.net/pdf?id=4UGuUZWZmi4Ze | https://openreview.net/forum?id=4UGuUZWZmi4Ze | ah5kV2s_ULa20 | review | 1,362,214,680,000 | 4UGuUZWZmi4Ze | [
"everyone"
] | [
"anonymous reviewer cce5"
] | ICLR.cc/2013/conference | 2013 | title: review of Feature grouping from spatially constrained multiplicative interaction
review: This paper proposes a novel generalization of the Gated Boltzmann Machine. Unlike a traditional GBM, this model is constrained in a way that hidden units that are grouped together (groupings defined a priori) can gate each other's connections. The model is shown to produce group structure in the learned representations (topographic feature maps) as well as frequency and orientation consistency of the filters within each group.
This paper is well written, presents a novel learning paradigm and is of interest to the representation learning community, especially those researchers interested in higher-order RBMs and transformation learning.
Positive points of the paper:
* Novelty
* Readability
* Treatment of an area (transformation learning) that is, in my opinion, worthy of more attention in the representation learning community
* Makes connections to the 'group sparse coding' literature (where other papers have proposed encouraging the squared responses of grouped filters to be similar)
* Makes a good effort to explain the observed phenomena (e.g. in discussing the filter responses)
Negative points of the paper:
* Targets somewhat of a 'niche audience'; may be less accessible to the general representation learning community
* Presents a lot of qualitative but not quantitative results
Overall, it's a nice paper.
Some specific comments:
Fig 2: It's difficult to read/understand the frequency scale (left, bottom); it seems that frequency has been discretized; what do these bins represent, and how are they constructed?
In section 2.1, could you be more explicit about what you mean by 'matching' the input filters (and the output filters). I assume the matching is referring to the connectivity by connecting filters to mapping units? Matching comes up again in Section 3, so it would help to clarify this early on.
Check equation 8: what happened to C - should it not show up there? |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training objective, sometimes performing worse on the training set than smaller networks. This suggests that the optimization method - first order gradient descent - fails at this regime. Directly attacking this problem, either through the optimization method or the choices of parametrization, may allow to improve the generalization error on large datasets, for which a large capacity is required. | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | ChpzCSZ9zqCTR | review | 1,361,967,300,000 | TT0bFo9VZpFWg | [
"everyone"
] | [
"anonymous reviewer 9741"
] | ICLR.cc/2013/conference | 2013 | title: review of Big Neural Networks Waste Capacity
review: This papers show the effects of under-fitting in a neural network as the size of a single neural network layer increases. The overall model is composed of SIFT extraction, k-mean, and this single hidden layer neural network. The paper suggest that this under-fitting problem is due to optimization problems with stochastic gradient descent.
Pros
For a certain configurations of network architecture the paper shows under-fitting remains as the number of hidden units increases.
Cons
This paper makes many big assumptions:
1) that the training set of millions of images is labelled correctly.
2) training on sift features followed by kmeans retains enough information from the images in the training set to allow for proper learning to proceed.
3) a single hidden layer network is capable of completely fitting (or over-fitting) Imagenet.
While the idea seems novel, it does appear to be a little rushed. Perhaps more experimentation with larger models and directly on the input image would reveal more. |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training objective, sometimes performing worse on the training set than smaller networks. This suggests that the optimization method - first order gradient descent - fails at this regime. Directly attacking this problem, either through the optimization method or the choices of parametrization, may allow to improve the generalization error on large datasets, for which a large capacity is required. | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | MvRrJo2NhwMOE | review | 1,362,019,740,000 | TT0bFo9VZpFWg | [
"everyone"
] | [
"anonymous reviewer b2da"
] | ICLR.cc/2013/conference | 2013 | title: review of Big Neural Networks Waste Capacity
review: The net gets bigger, yet keeps underfitting the training set. Authors suspect that gradient descent is the culprit. An interesting study! |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training objective, sometimes performing worse on the training set than smaller networks. This suggests that the optimization method - first order gradient descent - fails at this regime. Directly attacking this problem, either through the optimization method or the choices of parametrization, may allow to improve the generalization error on large datasets, for which a large capacity is required. | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | PPZdA2YqSgAq6 | review | 1,362,402,480,000 | TT0bFo9VZpFWg | [
"everyone"
] | [
"George Dahl"
] | ICLR.cc/2013/conference | 2013 | review: The authors speculate that the inability of additional units to reduce
the training error beyond a certain point in their experiments might
be because 'networks with more capacity have more local minima.' How
can this claim about local minima be reconciled with theoretical
asymptotic results that show that, for certain types of neural
networks, in the limit of infinite hidden units, the training problem
becomes convex?
As far as I can tell from the description of the experiments, they
used constant learning rates and no momentum. If getting the best
training error is the goal, in my experience I have found it crucial
to use momentum, especially if I am not shrinking the learning
rate. The experimental results would be far more convincing to me if
they used momentum or at least tried changing the learning rate during
training.
The learning curves in figure 3 show that larger nets reach a given
training error with drastically fewer updates than smaller nets. In
what sense is this an optimization failure? Without a more precise
notion of the capacity of a net and how it changes as hidden units are
added, the results are very hard to interpret. If, for some notion of
capacity, the increase in capacity from adding a hidden unit decreases
as more hidden units are added, then we would also expect to see
similar results, even without any optimization failure. How many
hidden units are required to guarantee that there exists a setting of
the weights with zero training error? Why should we expect a net with
15,000 units to be capable of getting arbitrarily low training error
on this dataset? If instead of sigmoid units the net used radial basis
functions, then with a hidden unit for each of the 1.2 million
training cases I would expect the net to be capable of zero
error. Since the data are not pure random noise images, surely fewer
units will be required for zero error, but how many approximately?
Without some evidence that there exists a setting of the weights that
achieve lower error than actually obtained, we can't conclude that the
optimization procedure has failed. |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training objective, sometimes performing worse on the training set than smaller networks. This suggests that the optimization method - first order gradient descent - fails at this regime. Directly attacking this problem, either through the optimization method or the choices of parametrization, may allow to improve the generalization error on large datasets, for which a large capacity is required. | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | 5w24FePB4ywro | review | 1,362,373,200,000 | TT0bFo9VZpFWg | [
"everyone"
] | [
"Andrew Maas"
] | ICLR.cc/2013/conference | 2013 | review: Interesting topic. Another potential explanation for the diminishing return is the already good performance of networks with 5k hidden units. It could be that last bit of training performance requires fitting an especially difficult / nonlinear function and thus even 15k units in a single layer MLP can't do it. On such a large training set any reduction is likely statistically significant though, so it might help to zoom in on the plot or give error rates for the 5k and larger networks. Right now I think it's unclear whether the training error asymptotes because that's the best nearly any learning algorithm could do or because the single hidden layer is wasting capacity. More comparisons or analysis can help eliminate the alternate explanation. |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training objective, sometimes performing worse on the training set than smaller networks. This suggests that the optimization method - first order gradient descent - fails at this regime. Directly attacking this problem, either through the optimization method or the choices of parametrization, may allow to improve the generalization error on large datasets, for which a large capacity is required. | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | CqF6fhZ9QLCrY | comment | 1,363,311,720,000 | PPZdA2YqSgAq6 | [
"everyone"
] | [
"Yann Dauphin"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks to your comment, we have clarified our argument. The main point is not that the training error does not fall beyond a certain point, the main point is that there are *quickly diminishing returns for added number of hidden units* to the point where adding capacity is almost useless. Since measuring VC-dimension is impractical (and not practically relevant here, because we really care about a notion of effective capacity taking into
account the limitation of the optimization algorithm), the notion of 'capacity' that we care about is basically measured by the number of training examples we are able to nail with a given network size and a given budget of training iterations. So in terms of the paper, you have to look at Figure 2, not Figure 1. We have redone Figure 2 to clarify that after 5000 examples, each hidden unit brings less benefit than if it was hardcoded to handle one of the training errors. A fading ROI on the *training error* means that it's harder and harder to make use of the added hidden units, i.e., that the extra capacity brought in by each added hidden unit *decreases* as we consider larger nets. We hypothesize this low ROI on the training error is why people have observed low ROI on the *test* set. That is why we suggest it is worthwhile to investigate methods that will increase the ROI from larger models.
We are not saying that the optimization issue is necessarily due to local minima. We say it could be local minima or ill-conditioning (the two main types of optimization difficulties one can imagine for neural nets).
Regarding the results with infinite number of hidden units and convex training, there is no contradiction: with an infinite number of hidden units (or equivalently, one per training example), you only need to train the output weights, and that is convex. Here, the number of hidden units is still smaller than the number of training examples. The we believe that the optimization difficulty is with training the lower layers.
The learning rate is decreased by 5% each time the training error goes up after an epoch.
We are planning in a second phase of this work to experiment with a wider array of training techniques and architectures to compare their ROI curves, and momentum. |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training objective, sometimes performing worse on the training set than smaller networks. This suggests that the optimization method - first order gradient descent - fails at this regime. Directly attacking this problem, either through the optimization method or the choices of parametrization, may allow to improve the generalization error on large datasets, for which a large capacity is required. | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | JqnQqLEIc6q5e | comment | 1,363,644,660,000 | wjvpl_b23glfA | [
"everyone"
] | [
"Yann Dauphin"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks for your suggestion. We didn't plot the cross-entropy it is harder to interpret, but it might be interesting in comparison with the training error curve. |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training objective, sometimes performing worse on the training set than smaller networks. This suggests that the optimization method - first order gradient descent - fails at this regime. Directly attacking this problem, either through the optimization method or the choices of parametrization, may allow to improve the generalization error on large datasets, for which a large capacity is required. | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | IyZiWpNTixIVv | comment | 1,363,311,660,000 | 5w24FePB4ywro | [
"everyone"
] | [
"Yann Dauphin"
] | ICLR.cc/2013/conference | 2013 | reply: Interesting point, the asymptote in Figure 1 could be explained by the optimization problem becoming more difficult. However, this does not conflict with our argument. We have clarified this in the paper. Our argument relies on Figure 2, which shows the return on investement for adding units. We see that the ROI quickly decreases, even when going from 2000 to 5000 units it decreases an order of magnitude. If the optimization problem did not get harder, we would have expected the ROI to be close to constant, but it seems the optimization becomes harder as more units are added. What's more, beyond 5000 units the ROI falls below the line of 1 error reduced per unit. If there was no optimization problem, the ROI should at least be 1 because the additional unit can be used as a template matcher for one of the training errors. |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training objective, sometimes performing worse on the training set than smaller networks. This suggests that the optimization method - first order gradient descent - fails at this regime. Directly attacking this problem, either through the optimization method or the choices of parametrization, may allow to improve the generalization error on large datasets, for which a large capacity is required. | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | wjvpl_b23glfA | comment | 1,363,381,980,000 | IyZiWpNTixIVv | [
"everyone"
] | [
"Marc Shivers"
] | ICLR.cc/2013/conference | 2013 | reply: Have you looked at the decrease in the cross-entropy optimization objective, rather than training error, as a function of number of hidden units? It would be interesting to see a version of Figure 2 that compared the decrease in cross-entropy as you add hidden units with the decrease you would get if your additional hidden units memorized the previously most costly mislabellings. |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training objective, sometimes performing worse on the training set than smaller networks. This suggests that the optimization method - first order gradient descent - fails at this regime. Directly attacking this problem, either through the optimization method or the choices of parametrization, may allow to improve the generalization error on large datasets, for which a large capacity is required. | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | URyDlbBNoEUIn | comment | 1,363,311,600,000 | ChpzCSZ9zqCTR | [
"everyone"
] | [
"Yann Dauphin"
] | ICLR.cc/2013/conference | 2013 | reply: The 3 assumptions can be thought of has 3 conditions that are necessary for the model to be able to fit ImageNet. In traditional experiments this would be true, however, in this case we are only monitoring *training* error. To learn the training set, only one assumption is necessary: no training image has an exact duplicate with a different label. In this case, the model can at least learn a KNN-like function that gives 0 error.
As for more experiments, we are planning experiments starting from the raw images. |
g6Jl6J3aMs6a7 | Recurrent Online Clustering as a Spatio-Temporal Feature Extractor in
DeSTIN | [
"Steven R. Young",
"Itamar Arel"
] | This paper presents a basic enhancement to the DeSTIN deep learning architecture by replacing the explicitly calculated transition tables that are used to capture temporal features with a simpler, more scalable mechanism. This mechanism uses feedback of state information to cluster over a space comprised of both the spatial input and the current state. The resulting architecture achieves state-of-the-art results on the MNIST classification benchmark. | [
"feature extractor",
"recurrent online clustering",
"destin",
"basic enhancement",
"transition tables",
"temporal features",
"simpler",
"scalable mechanism"
] | https://openreview.net/pdf?id=g6Jl6J3aMs6a7 | https://openreview.net/forum?id=g6Jl6J3aMs6a7 | GGdathbFl15ug | review | 1,362,391,440,000 | g6Jl6J3aMs6a7 | [
"everyone"
] | [
"anonymous reviewer 675f"
] | ICLR.cc/2013/conference | 2013 | title: review of Recurrent Online Clustering as a Spatio-Temporal Feature Extractor in
DeSTIN
review: The paper presents an extension to the author's prior 'DeSTIN' framework for spatio-temporal clustering. The lookup table that was previously used for state transitions is replaced by a feedback, output-to-input loop that somewhat resembles a recurrent neural network. However so little information is provided about the original system that it is difficult to tell if this is an advantage or not. The paper would be a lot clearer and more self-contained if it described and motivated DeSTIN before introducing the new algorithm.
The method is first applied to binary classification with toy sequences. The sequences are not defined, except that the two classes differ only in the first element - making it a memory recall task. The results suggest that the architecture has difficulty retaining information for long periods, with accuracy close to random guessing after 30 timesteps. They also seem to show that the number of centroids controls the underfitting/overfitting of the algorithm.
The paper claims 'state-of-the-art-results on the MNIST classification benchmark'; but the recorded error rate (1.29%) is a long way from the current benchmark (0.23%) - see http://yann.lecun.com/exdb/mnist/. Only 15,000 of the training cases were used, which somewhat mitigates the results. However the statement in the abstract should be changed. The experimental details are very scarce, and I doubt they could be recreated by other researchers. |
g6Jl6J3aMs6a7 | Recurrent Online Clustering as a Spatio-Temporal Feature Extractor in
DeSTIN | [
"Steven R. Young",
"Itamar Arel"
] | This paper presents a basic enhancement to the DeSTIN deep learning architecture by replacing the explicitly calculated transition tables that are used to capture temporal features with a simpler, more scalable mechanism. This mechanism uses feedback of state information to cluster over a space comprised of both the spatial input and the current state. The resulting architecture achieves state-of-the-art results on the MNIST classification benchmark. | [
"feature extractor",
"recurrent online clustering",
"destin",
"basic enhancement",
"transition tables",
"temporal features",
"simpler",
"scalable mechanism"
] | https://openreview.net/pdf?id=g6Jl6J3aMs6a7 | https://openreview.net/forum?id=g6Jl6J3aMs6a7 | 8BGL8F0WLpBcE | review | 1,362,163,920,000 | g6Jl6J3aMs6a7 | [
"everyone"
] | [
"anonymous reviewer 6b68"
] | ICLR.cc/2013/conference | 2013 | title: review of Recurrent Online Clustering as a Spatio-Temporal Feature Extractor in
DeSTIN
review: Improves the DeSTIN architecture by the same authors.
They write on MNIST:
A classification accuracy of 98.71% was achieved which is comparable to results using the first-generation DeSTIN architecture [1] and to results achieved with other state-of-the-art methods [4, 5, 6].
However, the error rate of the state-of-the-art method on MNIST is actually five times better: 99.77% (Ciresan et al, CVPR 2012). Please discuss this. |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects. | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | 24bs4th0sfgwE | review | 1,362,833,520,000 | eQWJec0ursynH | [
"everyone"
] | [
"anonymous reviewer c262"
] | ICLR.cc/2013/conference | 2013 | title: review of Barnes-Hut-SNE
review: The paper addresses the problem of low-dimensional data embedding for visualization purposes via stochastic neighbor embedding, in which Euclidean dissimilarities in the data space are modulated by the Gaussian kernel, and a configuration of points in the low-dimensional embedding space is found such that the new dissimilarities in the embedding space obtained via the Student-t kernel match the original ones as closely as possible in the sense of the KL divergence. While the original algorithm is O(n^2), the authors propose to use a fast multipole technique to reduce complexity to O(nlogn). The idea is original and the reported results are very convincing. I think it is probably one of the first instances in which an FMM technique is used to accelerate local embeddings.
Pros:
1. The idea is simple and is relatively easy to implement. The authors also provide code.
2. The experimental evaluation is large-scale, and the results are very convincing.
Cons:
1. No controllable tradeoff between the embedding error and acceleration.
2. In its current setting, the proposed approach is limited to local similarities only. Can it be extended to other settings in which global similarities are at least as important as the local ones? In other words, is it possible to apply a similar scheme for MDS-type global embedding algorithms? |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects. | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | DyHSDHfKmbDPM | review | 1,362,421,080,000 | eQWJec0ursynH | [
"everyone"
] | [
"Laurens van der Maaten"
] | ICLR.cc/2013/conference | 2013 | review: I have experimented with dual-tree variants of my algorithm (which required only trivial changes in the existing code), experimenting with both quadtrees and kd-trees as the underlying tree structures. Perhaps surprisingly, the dual-tree algorithm has approximately the same accuracy-speed trade-off as the Barnes-Hut algorithm (even when redundant dual-tree computations are pruned) irrespective of what tree is used.
I think the main reason for this result is that after computing an interaction between two cells, one still needs to figure out to which points this interaction needs to be added (i.e. which points are in the cell). This set of points can either be obtained using a full search of the tree corresponding to the cell, or by storing a list of children in each node during tree construction. Both these approaches are quite costly, and lead the computational advantages of the dual-tree algorithm to evaporate. (The dual-tree algorithm does provide a very cheap way to estimate the value of the t-SNE cost function though.)
I will add these results in the final paper. |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects. | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | Dkj3DFf4GZJPh | review | 1,362,177,000,000 | eQWJec0ursynH | [
"everyone"
] | [
"anonymous reviewer d9db"
] | ICLR.cc/2013/conference | 2013 | title: review of Barnes-Hut-SNE
review: Stochastic neighbour embedding (SNE) is a sound, probabilistic method for dimensionality reduction. One of its limitations is that its complexity is O(N^2), where N is the, typically large, number of data points. To surmount this limitation, the this paper proposes computational methods to reduce the computational cost to O(NlogN), while only incurring an O(N) memory cost.
In the SNE variant discussed in this paper, the kernel in high dimensions is Gaussian, while the similarity in low dimensions is governed by a t-distribution. The proposed method consists of two components. First, the exponential decay of Gaussian measures is used to carry out truncation and construct a vantage-point tree for the data in high dimensions. This enables the authors to carry our nearest neighbour search in O(NlogN). The second component addreses the efficient computation of the gradient of SNE. Here, the paper proposes a 2D Barnes-hut algorithm to approximate the gradient in O(NlogN) steps. The Barnes-Hut algorithm is a well known method in N-body simulation, but it has not been used in this context previously to the best of my knowledge.
The paper is very well written. The contribution is correct and sound. Not surprisingly, the experiments show great improvements in computational performance, thus allowing for a good dimensionality reduction technique to become more broadly applicable.
The author ought to be commended for making the code available. He should also be commended for making the limitations of the approach very clear in the concluding remarks, namely that the current version is only for 2D-embeddings and that the method does not offer a way of controlling the error (e.g. via error bounds).
Minor typo in Page 2 last line: to slow should be too slow.
I believe the paper makes a good contribution. However, it has one crucial shortcoming that must be addressed by the author. Specifically, there is a great body of literature on N-body methods for machine learning problems that the author does not seem to be aware of. I think this work should be placed in this context and that appropriate references and comparisons (for which I will point the author to online software) should be included in the final form in this paper. The relevant work includes:
1. All the dual-tree approximations developed by Alex Gray at http://www.fast-lab.org/
In particular note that his methods apply to nearest neighbour search and the type of kernel density estimates required in the computation of the gradient. Dual trees also allow for the use of error bounds. For publications, see e.g.
Gray, Alexander G., and Andrew W. Moore. 'N-Body'problems in statistical learning.' Advances in neural information processing systems (2001): 521-527.
Liu, Ting, Andrew W. Moore, Alexander Gray, and Ke Yang. 'An investigation of practical approximate nearest neighbor algorithms.' Advances in neural information processing systems 17 (2004): 825-832.
2. The multipole methods developed in Ramani Duraiswami lab, including:
Yang, Changjiang, Ramani Duraiswami, Nail A. Gumerov, and Larry Davis. 'Improved fast gauss transform and efficient kernel density estimation.' In Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on, pp. 664-671. IEEE, 2003.
3. The algorithms for fast kernel density estimates from Nando de Freitas' lab. See e.g.,
Mahdaviani, Maryam, Nando de Freitas, Bob Fraser, and Firas Hamze. 'Fast computational methods for visually guided robots.' In IEEE International Conference on Robotics and Automation, vol. 1, p. 138. IEEE; 1999, 2005.
Lang, Dustin, Mike Klaas, and Nando de Freitas. 'Empirical testing of fast kernel density estimation algorithms.' UBC Technical repor 2 (2005).
One of his papers does, in fact, discuss multipole methods for SNE and presents results using the fast Gauss transform:
De Freitas, Nando, Yang Wang, Maryam Mahdaviani, and Dustin Lang. 'Fast Krylov methods for N-body learning.' Advances in neural information processing systems 18 (2006): 251.
The code is available here: http://www.cs.ubc.ca/~awll/nbody_methods.html
4. The cover tree for nearest neighbour search, introduced in:
Beygelzimer, Alina, Sham Kakade, and John Langford. 'Cover trees for nearest neighbor.' In MACHINE LEARNING-INTERNATIONAL WORKSHOP THEN CONFERENCE-, vol. 23, p. 97. 2006.
For code, see the Wikipedia entry: http://en.wikipedia.org/wiki/Cover_tree
5. FLANN - Fast Library for Approximate Nearest Neighbors developed by Marius Muja. This is a powerful library of methods including randomized kd-trees and k-means methods for fast nearest neighbour search. It is extremely popular in computer vision. For code and more info see: http://www.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN
You could use this code easily to replace the nearest neighbour search and compare performance.
Finally, there is something very interesting in this paper that is worth studying further. Assume we use an N-body method in the computation of the gradient, which has error bounds. Then, it seems to stand to reason that one ought to use loose bounds in the beginning of the gradient iterations and increase the precision as the algorithm progresses. This could allow for further improvements in computation. Moreover, using theoretical tools for studying the convergence of optimization algorithms, one could possibly address the theoretical analysis of this algorithm. |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects. | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | TTxAqxZdhgIV0 | review | 1,362,330,660,000 | eQWJec0ursynH | [
"everyone"
] | [
"Laurens van der Maaten"
] | ICLR.cc/2013/conference | 2013 | review: Thanks a bunch for these insightful reviews and for the useful pointers to related work (some of which I was not aware of)!
In preliminary experiments, I compared locality-sensitive hashing and vantage-point trees in the initial nearest-neighbor (in the high-dimensional space). I found vantage-point trees to perform considerably better, which is why I used them in the final implementation. The strong performance I obtained when using metric trees appears to be in line with the results presented by Liu, Moore, Gray & Yang (2004). I agree with the first reviewer that there are many other (approximate) nearest-neighbor algorithms that could be used here instead. I will clarify this in the paper, and include references to relevant related work.
The work by Nando de Freitas's lab on n-body simulations is very interesting indeed. I don't think it can readily be applied to t-SNE though, as it appears to heavily rely on the (improved) fast Gauss transform, i.e. on the assumption that Gaussian kernels are used. To the best of my knowledge, there is no existing work that uses fast multipole methods to evaluate Student-t kernels (the fast Gauss transform is an example of a fast multipole method), so extension of this work to t-SNE appears non-trivial. It is also unclear whether fast multipole methods would actually outperform Barnes-Hut in practice, because multipole methods tend to have constants that are much worse. Having said that, this is indeed a very interesting direction for future work! I will clarify this in the paper, and make sure to include the relevant references.
I was not aware of the work by Alex Gray's lab on dual-tree algorithms for n-body simulations; indeed, this work seems readily applicable to t-SNE. I'm presently coding up a dual-tree version of my algorithm, and will try to include empirical evaluations with the dual-tree approach in the final version of the paper. I hope to post an updated version of the paper with these results on Arxiv in a week or two.
I agree with the first reviewer that it is interesting to study if the accuracy-speed trade-off can be adapted during the optimization, but I am not sure that I agree that looser bounds should be used in the beginning of the optimization. In fact, the first 100 or so iterations are essential in identifying the global structure of the data --- doing a poor job in those iterations often implies getting stuck in poor local optima. (I guess one can think of it as errors propagating over time in the optimization.) So an optimal strategy may actually be the opposite of what the reviewer suggests: use tight bounds in the early stages of the optimization and looser bounds later on. It's certainly an interesting direction for future work! |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects. | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | 2VfI2cAZSF2P0 | review | 1,362,192,420,000 | eQWJec0ursynH | [
"everyone"
] | [
"anonymous reviewer 7db1"
] | ICLR.cc/2013/conference | 2013 | title: review of Barnes-Hut-SNE
review: The submitted paper proposes a more efficient implementation of the Student-t distributed version of SNE. t-SNE is O(n^2), and the proposed implementation is O(nlogn). This offers a substantial improvement in the efficiency, such that very large datasets may be embedded. Furthermore, the speed increase is obtained through 2 key approximations without incurring a penalty on accuracy of the embedding.
There are 2 approximations that are described. First, the input space nearest neighbors are approximated by building a vantage-point tree. Second, the approximation of the gradient of KL divergence is made by splitting the gradient into attractive and repulsive components and applying a Barnes-Hut algorithm to estimate the repulsive component. The Barnes Hut algorithm uses a hierarchical estimate of force. A quad-tree provides an efficient, hierarchical spatial representation.
The submission is well-written and seems to be accurate. The results validate the claim: the error of the embedding does not increase, and the computation time is decreased by an order of magnitude. The approach is tested on MNIST, NORB, TIMIT, and CIFAR. Overall, the contribution of the paper is fairly small, but the benefit is real, given the popularity of SNE. In addition, the topic is relevant for the ICLR audience. |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects. | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | pA91py2CW8AQg | review | 1,362,758,580,000 | eQWJec0ursynH | [
"everyone"
] | [
"Laurens van der Maaten"
] | ICLR.cc/2013/conference | 2013 | review: I updated the paper according the reviewers' comments, and included results with a dual-tree implementation of t-SNE in the appendix. The updated paper should appear on Arxiv soon. |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects. | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | Hy8wy4X01CHmD | review | 1,363,113,120,000 | eQWJec0ursynH | [
"everyone"
] | [
"Laurens van der Maaten"
] | ICLR.cc/2013/conference | 2013 | review: In typical applications of Barnes-Hut (like t-SNE), the force nearly vanishes in the far field, which allows for averaging those far-field forces without losing much accuracy.
In algorithms that minimize, e.g., the squared error between two sets of pairwise distances, I guess you could do the opposite. The force exerted on a point is then dominated by interactions with distant points, so you should be able to average over the interactions with nearby points without losing much accuracy. However, it's questionable whether such an approach would be as efficient because, in general, a point has far fewer points in its near field than in its far field (i.e. far fewer points for which we can average without losing accuracy).
Having said that, I have never tried, so I could be wrong. |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects. | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | AZcnMdQBqGZS4 | review | 1,362,833,640,000 | eQWJec0ursynH | [
"everyone"
] | [
"Alex Bronstein"
] | ICLR.cc/2013/conference | 2013 | review: Laurens, have you thought about using similar ideas for embedding algorithms that also exploit global similarities (like multidimensional scaling)? I think in many types of data analysis, this can be extremely important. |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects. | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | H3-iUVuyZzUgh | review | 1,365,114,600,000 | eQWJec0ursynH | [
"everyone"
] | [
"Zhirong Yang"
] | ICLR.cc/2013/conference | 2013 | review: Great work, congratulations! It seems we and you have simultaneously found essentially the same solution. Our paper and software are here:
Zhirong Yang, Jaakko Peltonen, Samuel Kaski. Scalable Optimization of Neighbor Embedding for Visualization. Accepted to ICML2013.
Preprint and software: http://research.ics.aalto.fi/mi/software/ne/
Best regards,
Zhirong, Jaakko, Samuel |
fm5jfAwPbOfP6 | Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines | [
"Yuanlong Shao"
] | One conjecture in both deep learning and classical connectionist viewpoint is that the biological brain implements certain kinds of deep networks as its back-end. However, to our knowledge, a detailed correspondence has not yet been set up, which is important if we want to bridge between neuroscience and machine learning. Recent researches emphasized the biological plausibility of Linear-Nonlinear-Poisson (LNP) neuron model. We show that with neurally plausible settings, the whole network is capable of representing any Boltzmann machine and performing a semi-stochastic Bayesian inference algorithm lying between Gibbs sampling and variational inference. | [
"bayesian inference",
"neuron networks",
"boltzmann machines",
"conjecture",
"deep learning",
"classical connectionist viewpoint",
"deep networks",
"knowledge",
"detailed correspondence"
] | https://openreview.net/pdf?id=fm5jfAwPbOfP6 | https://openreview.net/forum?id=fm5jfAwPbOfP6 | QQ1JEKYFTIQhj | review | 1,362,262,200,000 | fm5jfAwPbOfP6 | [
"everyone"
] | [
"anonymous reviewer 4490"
] | ICLR.cc/2013/conference | 2013 | title: review of Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines
review: This paper proposes a scheme for utilizing LNP model neurons to perform inference in Boltzmann Machines. The contribution of the work is to map a Boltzmann Machine network onto a set of LNP model units and to demonstrate inference in this model.
The idea of using neural spiking models to represent probabilistic inference is not new (see refs. at end). The primary contribution of this work is to take a learned deep Boltzmann machine from the literature, and to implement this network using LNP neurons, with the necessary modifications. Therefore, the contribution is specific to the deep Boltzmann machine architecture. The existing work in the literature often takes a different approach: taking realistic neural models and asking how these models can represent variables probabilistically.
Pros:
Developing mappings between machine learning algorithms and neural responses is an important direction.
To my knowledge, the implementation of a deep-BM with spiking neurons is novel.
Cons:
The clarity of the text and presentation of the mathematics needs improvement.
The resulting model suffers from some non-biological phenomenology.
The empirical results are not very compelling.
I would liked to have seen a comparison to the existing approaches for using spiking neurons to implement inference. Particularly: [2-5]. Is there not a mapping from those models to the deep BM? Why is the proposed mapping necessary, or what are the limitations of those existing proposals for a deep BM?
Other comments:
The paper provides a lengthy introduction to LNP and inference. I would encourage the author to justify the various details that are introduced, and those that are not directly relevant for the proposed network, should be left out. In general, the exposition needs clarification.
The proposed network seems like a logical series of steps, but the end result leads to a biologically implausible network, (at least when considering known properties of cortex). I think a broad approach might be warranted for this problem. For example, starting from the LNP model and using this model as an element in a Boltzmann machine.
A related note: Isn't it just more plausible to estimate a deep-network with positive only weights? (to deal with Dale's law) There is likely some work to be done there, but it seems this direction wouldn't require the paired neurons you have here. Or a network with realistic excitatory-inhibitory ratio?
Why not start with a Poisson-unit Boltzmann machine, and examine its properties? see (Welling et al. 2005)
I found the empirical evaluation to be weak. I don't understand how running the network is a demonstration of correct inference. Wouldn't we expect each of these networks to diverge and sample different parts of the posterior?
The statistics in Figure 5 need more justification. I did not understand why these are relevant, or what degree of variability should be acceptable.
There are, of course, a variety of biological issues that seem to be incongruent with the proposal.
In cortex the distribution of excitatory to inhibitory neurons is 4:1. The current proposal seems to require 1:1.
The pairing of neuron weights seems unlikely, but maybe this could be solved through learning?
What about mean firing rates?, are these consistent between model and cortical responses?
The title is a little misleading. I might suggest something more like:
Networks of LNP neurons are capable of performing Bayesian inference on Boltzmann machines.
The work of Shi and Griffiths 2009 seems highly relevant, and addresses some of the questions posed by the author.
Note that Dale's law is not generally applicable, but I am not sure about any refuttaion in cortex, which I assume is where you would imagine the deep network. (see co-transmission)
[1] Welling, M., Rosen-Zvi, M., and Hinton, G. E. (2005). Exponential family harmoniums with an application to information retrieval. Advances in Neural Information Processing Systems 17, pages 1481-1488. MIT Press, Cambridge, MA.
[2] Shi, L., & Griffiths, T. L. (2009). Neural implementation of hierarchical Bayesian inference by importance sampling. Advances in Neural Information Processing Systems 22
[3] Ma WJ, Beck JM, Pouget A (2008) Spiking networks for Bayesian inference and choice. Current Opinion in Neurobiology 18, 217-22.
[4] Pecevski D, Buesing L, Maass W (2011) Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons. PLoS Comput Biol 7(12): e1002294
[5] József Fiser, Pietro Berkes, Gergő Orbán, Máté Lengyel. Statistically optimal perception and learning: from behavior to neural representations', Trends Cogn Sci. 2010. 14(3):119-130 |
fm5jfAwPbOfP6 | Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines | [
"Yuanlong Shao"
] | One conjecture in both deep learning and classical connectionist viewpoint is that the biological brain implements certain kinds of deep networks as its back-end. However, to our knowledge, a detailed correspondence has not yet been set up, which is important if we want to bridge between neuroscience and machine learning. Recent researches emphasized the biological plausibility of Linear-Nonlinear-Poisson (LNP) neuron model. We show that with neurally plausible settings, the whole network is capable of representing any Boltzmann machine and performing a semi-stochastic Bayesian inference algorithm lying between Gibbs sampling and variational inference. | [
"bayesian inference",
"neuron networks",
"boltzmann machines",
"conjecture",
"deep learning",
"classical connectionist viewpoint",
"deep networks",
"knowledge",
"detailed correspondence"
] | https://openreview.net/pdf?id=fm5jfAwPbOfP6 | https://openreview.net/forum?id=fm5jfAwPbOfP6 | B4qSE6NM3ZEOV | review | 1,362,383,640,000 | fm5jfAwPbOfP6 | [
"everyone"
] | [
"Yuanlong Shao"
] | ICLR.cc/2013/conference | 2013 | review: Thank you very much for the valuable reviews and references! I learned quite a lot from reading the suggested papers.
--> For Reviewer caa8:
- Regarding the question raised in the end of your review, I think a somewhat related question is why neurons use spikes and whether we shall follow that in our computational model. I was previously approaching this question by exploring whether the combined semi-stochastic algorithm does something similar to simulated annealing, resulting in better local optima estimated in variational inference. But I failed to show this kind of superiority. In all the experiments I did, pure variational inference performs better than semi-stochastic inference in the context of classification with DBM on MNIST (probably because the DBM is well learned and the posteriors are well shaped with only one significant mode, also see the next paragraph).
Another possibility of answering this is that, according to neural coding literature, such as works from Aurel Lazar, spikes are efficient ways of encoding time-varying real-valued functions if they are band limited (smooth in some sense). So neurons may be constrained by the energy they can use and have to choose the spiking approach. If that is the case, the 'randomness' of LNP is not that important, what's important of spikes are their property of reconstructing the function. And indeed more realistic neuron models such as Leaky Integrate-and-Fire (LIF) and Hodgkin-Huxley (HH) are not random (Chapter 5 of <6> provide a review about randomness vs. chaotic property of neurons). Are the spikes generated only to meet the reconstruction requirement? I think this worth further investigation. What I do found after submitting this paper is that, in the classification experiments I just mentioned, LIF networks work much better than LNP networks because the converged spiking pattern in LIF is periodic and the activation we get from convolving the recent spiking history is more stable, but if I replace the pseudo-random number generator in LNP by a quasi-random number generator (an effort to get more evenly distributed random samples), LNP and LIF behaves similarly. Another finding is that, in my recent GPU-cluster implementation of stochastic networks, the only way I can balance the data transmission with the computation is by transmitting packed binary spikes among computation nodes. So maybe LIF and HH are merely certain kinds of quasi-random generators used to do variational inference in an economic way.
--> For Reviewer 4490
- I think the Poisson-unit BM direction is interesting, and I will definitely explore this in the future. Thanks for the review. In the supplemental material, I have a detailed justification about in what sense the discrete time Bernoulli model can be considered as approximations to the continuous time Poisson model. So for this paper, I think the foundation is OK.
- I agree to your comments that this paper is specific to the deep Boltzmann machine architecture. I didn't rule out the possibility that neurons can represent other models. But as long as they can represent Boltzmann machines with hidden variables, then the representation power is promised. As a compact universal approximator, even if behavior experiments reveals different types of probability models, they are not necessarily conflicting and may still be implemented by DBM. In addition, what's in my paper is extendable to high-order Boltzmann machines, as long as convolution with D function in section 2 of my paper can be eliminated (which I already validated on LIF and will put in my future publications).
- As to whether it is enough to have positive weights only, I'm not sure. If the weights are symmetric, then the question is whether a Boltzmann machine with positive weights only is still universal for knowledge representation. I believe the answer is yes, because if yes, then the brain could be doing learning by using positive weights only in most of the normal time but add additional negative weights on demand when serious mistakes are made which need to be revised effectively. This way positive weights will be dominating but not exclusive. So yes I also think this issue needs to be delayed when we deal with neurally plausible learning. If Boltzmann machine with positive only weights and a constant bias is still universal, the original Boltzmann machine without the softmax manipulation in my paper could suffice for LNP modeling.
- For whether the network will diverge to other parts of the posterior, I think this is very probable, since variational inference is a local optimum algorithm, and by my justification in the supplemental material, the semi-stochastic inference algorithm also approaches the local optima of variational inference. The good news is that by <3>, if a model is well learned, variational inference is good in the sense that the variational lower bound is close to the true likelihood, we can interpreting this result as the true posterior is highly single-moded. This alleviates the problem of local optima.
- For low mean firing rate, I already considered this. A broader question is that if the nonlinear activation is not sigmoid (such as the Figure 1 in my paper), or if the maximum output of the activation function is low, can we still consider the LNP as a inference procedure. My answer is yes. In deriving the variational inference, we obtain the sigmoid activation from differentiating the KL-divergence loss function. If the activation is not sigmoid, we can easily reverse this derivation to obtain the new loss function, and by taking a difference to the original loss function, we get a regularization term. With different ways of fitting LNP to a Boltzmann machine, the regularization term can be different. Most of the time such regularization term is such that they favors low activity, which is reasonable. In this way, a different activation function can be regarded as a regularized variational inference. I can put more about this issue in the next revision if the paper is accepted.
--> In the following I put a brief discussion of the reference papers provided by the reviewers. Please let me know if I made any mistake on my interpretation of these papers, as I read them in a hurry.
- József Fiser, Pietro Berkes, Gergő Orbán, Máté Lengyel. Statistically optimal perception and learning: from behavior to neural representations', Trends Cogn Sci. 2010. 14(3):119-130
This paper previously inspired me a lot. Although there is no specific computational models proposed in this paper, most of the schemes which they discussed are what I'm following in my paper. For example, (1) they mentioned that sampling-based approaches, compared to parameter-based approach, has direct option for learning, but they lack the basis for experimental test. My work connects abstract computational models to LNP spiking neurons, allowing direct test (my on-going work right now extends this to Leaky Integrate-and-Fire and Hodgkin-Huxley as well). (2) They also mentioned that parameter-based approaches may suffer from the exponential explosion in the required size of neurons, while connectionist models such as Boltzmann machine rely on distributed coding which don't suffer from this problem. Furthermore, Boltzmann machines may be compact universal approximators. In this sense, modeling knowledge representation in terms of Boltzmann machines would be safe as long as the learnability issue can be addressed. (3) They also mentioned the 'spontaneous activity'. One of the issues when interpreting spikes as samples is that Monte-Carlo methods such as Gibbs sampling does not converge to the right distribution when all neurons sample together in parallel, while my approach provides an alternative viewpoint as a stochastic approximation of variational inference. The variational distribution one can get can be considered as an approximation of a mode of the joint posterior <4>, thus the activities in different neurons will be correlated according where the mode is located, this leads to an explanation of 'spontaneous activity'. (4) They also mentioned that inference and learning should be considered together when talking about representation. I didn't have it in my current paper, but my on-going work, built on this model, relates STDP to backpropagation, and by <1>, error-driven learning is hopeful to yield consistent probabilistic models with a properly chosen learning scheme. I will make these works available once the learning rule is tested in actual learning tasks. Also, if one favors likelihood-based learning such as contrastive divergence, the description on how hippocampus works in <2> implies that the positive phase of contrastive divergence, if implemented by variational inference as in <3>, can be preserved as short-term memory in the brain. The negative phase of contrastive divergence may be implemented by dreaming <5> (these early works about unlearning/reverse learning are about Hopfield Networks. But HN is a thresholded version of variational inference in Boltzmann machines with hidden variables, thus in terms of representation and learning, they are highly related to each other).
- Reichert, D. Deep Boltzmann Machines as Hierarchical Generative Models of Perceptual Inference in the Cortex. Ph.D. Thesis. 2012.
For now as what I understand, I think this paper is more of an 'analogical model', the clues they use to relate the stochastic property of spiking to machine learning is through Gibbs sampling, which has the issue I mentioned above, while my focus is more on interpreting neural spiking as variational inference.
- Shi, L., & Griffiths, T. L. (2009). Neural implementation of hierarchical Bayesian inference by importance sampling. Advances in Neural Information Processing Systems 22
To my understanding, this paper is about the implementation of importance sampler via one-hidden layer feedforward neural networks and then use it as building block to construct a hierarchical model for both top-down generative and bottom-up inference procedure. Thus if this is neurally plausible, it stands for other things that biological neural networks can do, which does not conflict to my work that neurons can perform approximate inference on DBM. The two lines of research can be separately proceeded.
- Ma WJ, Beck JM, Pouget A (2008) Spiking networks for Bayesian inference and choice. Current Opinion in Neurobiology 18, 217-22.
This paper is about the Probabilistic Population Code, which is another alternative for how neurons represent probability, belonging to the parameter-based approach discussed in the Fiser etc. 2010 paper above.
- Pecevski D, Buesing L, Maass W (2011) Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons. PLoS Comput Biol 7(12): e1002294
This paper is most interesting to me for now. I cannot comment on it before I read it carefully. I will come back with another post a bit later.
Reference List:
<1> Joshua V. Dillon, Guy Lebanon. Stochastic Composite Likelihood. Journal of Machine Learning Research 11 (2010) 2597-2633.
<2> O'Reilly, R. C., Bhattacharyya, R., Howard, M. D., & Ketz, N. (2011). Complementary learning systems. Cognitive Science
<3> Ruslan Salakhutdinov and Geoffrey Hinton. Deep boltzmann machines. Artificial Intelligence, 5(2):448C455, 2009.
<4> Thomas Minka. Divergence measures and message passing. Technical report, Microsoft Research, 2005.
<5> Francis Crick and Graeme Mitchison, The function of dream sleep, Nature 304, 111 - 114 (14 July 1983); doi:10.1038/304111a0.
<6> Wulfram Gerstner and Werner M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 1 edition, 2002. |
fm5jfAwPbOfP6 | Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines | [
"Yuanlong Shao"
] | One conjecture in both deep learning and classical connectionist viewpoint is that the biological brain implements certain kinds of deep networks as its back-end. However, to our knowledge, a detailed correspondence has not yet been set up, which is important if we want to bridge between neuroscience and machine learning. Recent researches emphasized the biological plausibility of Linear-Nonlinear-Poisson (LNP) neuron model. We show that with neurally plausible settings, the whole network is capable of representing any Boltzmann machine and performing a semi-stochastic Bayesian inference algorithm lying between Gibbs sampling and variational inference. | [
"bayesian inference",
"neuron networks",
"boltzmann machines",
"conjecture",
"deep learning",
"classical connectionist viewpoint",
"deep networks",
"knowledge",
"detailed correspondence"
] | https://openreview.net/pdf?id=fm5jfAwPbOfP6 | https://openreview.net/forum?id=fm5jfAwPbOfP6 | 1JfiMxWFQy15Z | review | 1,361,988,540,000 | fm5jfAwPbOfP6 | [
"everyone"
] | [
"anonymous reviewer caa8"
] | ICLR.cc/2013/conference | 2013 | title: review of Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines
review: The paper provides an explicit connection between the linear-nonlinear-poisson (LNP) model of biological neural networks and the Boltzmann machine. The author proposes a semi-stochastic inference procedure on Boltzmann machines, with some tweaks, that can be considered equivalent to the inference of an LNP model.
Author's contributions:
(1) Starting from the LNP neuron model the author, in detail, derives one (Eq. 5) that closely resembles a single unit in a Boltzmann machine.
(2) A semi-stochastic inference (Eq. 10) for a Boltzmann machine that combines Gibbs sampling and variational inference is introduced.
(3) Several tweaks (Sec. 4) are proposed to the semi-stochastic inference (Eq. 10) to mimic Eq.5 as closely as possible.
Pros)
As I am not an expert in biological neurons and their modeling, it is difficult for me to assess the novelty fully. Though, it is interesting enough to see that the inference in the biological neuronal network (based on the LNP model) corresponds to that in Boltzmann machines. Despite my lack of prior work and details in biological neuronal models, the reasoning seems highly detailed and understandable. I believe that not many work has explicitly shown the direct connection between them, at least, not on the level of a single neuron (Though, in a high level of abstraction, Reichert (2012) used a DBM as a biological model).
Cons)
If I understood correctly, unlike what the title seems to claim, the network consisting of LNP neurons do 'not' perform the exact inference on the corresponding Boltzmann machine. Rather, one possible approximate inference (the semi-stochastic inference, in this paper) on the Boltzmann machine corresponds to the LNP neural network (again, in a form presented by the author).
I can't seem to understand how the proposed inference, which essentially samples from the variational posterior and use them to compute the variational parameters, differs much from the original variational inference, except for that the proposed method adds random noise in estimating variational params. Well, perhaps, it doesn't really matter much since the point of introducing the new inferernce scheme was to find the correspondance between the LNP and Boltzmann machine.
= References =
Reichert, D. Deep Boltzmann Machines as Hierarchical Generative Models of Perceptual Inference in the Cortex. Ph.D. Thesis. 2012. |
fm5jfAwPbOfP6 | Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines | [
"Yuanlong Shao"
] | One conjecture in both deep learning and classical connectionist viewpoint is that the biological brain implements certain kinds of deep networks as its back-end. However, to our knowledge, a detailed correspondence has not yet been set up, which is important if we want to bridge between neuroscience and machine learning. Recent researches emphasized the biological plausibility of Linear-Nonlinear-Poisson (LNP) neuron model. We show that with neurally plausible settings, the whole network is capable of representing any Boltzmann machine and performing a semi-stochastic Bayesian inference algorithm lying between Gibbs sampling and variational inference. | [
"bayesian inference",
"neuron networks",
"boltzmann machines",
"conjecture",
"deep learning",
"classical connectionist viewpoint",
"deep networks",
"knowledge",
"detailed correspondence"
] | https://openreview.net/pdf?id=fm5jfAwPbOfP6 | https://openreview.net/forum?id=fm5jfAwPbOfP6 | 88txIZ2gY7lJh | review | 1,362,392,700,000 | fm5jfAwPbOfP6 | [
"everyone"
] | [
"anonymous reviewer ef61"
] | ICLR.cc/2013/conference | 2013 | title: review of Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines
review: This paper argues that inference in Boltzmann machines can be performed using neurons modelled according to the Linear Nonlinear-Poisson model. The LNP model is first presented, then one variant of inference procedure for Boltzmann machine is introduced and a section shows that LNP neurons can implement it. Experiments show that the inference procedure can produce reconstructions of handwritten digits.
Pros: the LNP model is presented at length and LNP neurons can indeed perform the operations needed for inference in the Boltzmann machine model.
Cons: the issue of learning the network itself is not tackled here at all.
While the mapping between the LNP model and the inference process in the machine is particularly detailed here, I did not find this particularly illuminating, given that restricted Boltzmann machines were designed with a simple inference procedure with only very simple operations.
I find this paper provides too little new insight to warrant acceptance at the conference. |
0OR_OycNMzOF9 | Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences | [
"Sainbayar Sukhbaatar",
"Takaki Makino",
"Kazuyuki Aihara"
] | Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from image sequences. It is trained to improve the temporal coherence of features, while keeping the information loss at minimum. Our method does not use spatial information, so it can be used with non-convolutional models too. Experiments on images extracted from natural videos showed that our method can cluster similar features together. When trained by convolutional features, auto-pooling outperformed traditional spatial pooling on an image classification task, even though it does not use the spatial topology of features. | [
"invariance",
"image sequences",
"features",
"learning",
"image features",
"images",
"invariant representations",
"hardest challenges",
"computer vision",
"spatial pooling"
] | https://openreview.net/pdf?id=0OR_OycNMzOF9 | https://openreview.net/forum?id=0OR_OycNMzOF9 | lvwFsD4fResyH | review | 1,361,921,040,000 | 0OR_OycNMzOF9 | [
"everyone"
] | [
"anonymous reviewer 1dcf"
] | ICLR.cc/2013/conference | 2013 | title: review of Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences
review: Summary:
This paper proposes learning a pooling layer (not necessarily of a convolutional network) by using temporal coherence to learn the pools. Training is accomplished by minimizing a criterion that encourages the features to change slowly but have high entropy over all.
Detailed comments:
-The method demonstrates improvement over a spatial pooling baseline
-The experiments here don't allow comparison to prior work on learning pools, such as the paper by Jia and Huang.
- The method is not competitive with the state of the art
Suggestions to authors:
In future revisions of this paper, please be more specific about what your source of natural videos was. Just saying vimeo.com is not very specific. vimeo.com has a lot of videos. How many did you use? Do they include the same kinds of objects as you need to classify on CIFAR-10?
Comparing to Jia and Huang is very important, since they also study learning pooling structure. Note that there are also new papers at ICLR on learning pooling structure you should consider in the future. I think Y-Lan Boureau also wrote a paper on learning pools that might be relevant.
Pros:
-The method demonstrates some improvement over baseline pooling systems applied to the same task.
Cons:
-Doesn't compare to prior work on learning pools
-The method isn't competitive with the state of the art, despite having access to extra training data. |
0OR_OycNMzOF9 | Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences | [
"Sainbayar Sukhbaatar",
"Takaki Makino",
"Kazuyuki Aihara"
] | Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from image sequences. It is trained to improve the temporal coherence of features, while keeping the information loss at minimum. Our method does not use spatial information, so it can be used with non-convolutional models too. Experiments on images extracted from natural videos showed that our method can cluster similar features together. When trained by convolutional features, auto-pooling outperformed traditional spatial pooling on an image classification task, even though it does not use the spatial topology of features. | [
"invariance",
"image sequences",
"features",
"learning",
"image features",
"images",
"invariant representations",
"hardest challenges",
"computer vision",
"spatial pooling"
] | https://openreview.net/pdf?id=0OR_OycNMzOF9 | https://openreview.net/forum?id=0OR_OycNMzOF9 | agstF_wXReF7S | review | 1,362,276,780,000 | 0OR_OycNMzOF9 | [
"everyone"
] | [
"Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | review: Interesting paper.
You might be interested in this paper by Karol Gregor and myself: http://arxiv.org/abs/1006.0448
The second part of the paper also describes a kind of pooling based on temporal constancy. |
0OR_OycNMzOF9 | Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences | [
"Sainbayar Sukhbaatar",
"Takaki Makino",
"Kazuyuki Aihara"
] | Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from image sequences. It is trained to improve the temporal coherence of features, while keeping the information loss at minimum. Our method does not use spatial information, so it can be used with non-convolutional models too. Experiments on images extracted from natural videos showed that our method can cluster similar features together. When trained by convolutional features, auto-pooling outperformed traditional spatial pooling on an image classification task, even though it does not use the spatial topology of features. | [
"invariance",
"image sequences",
"features",
"learning",
"image features",
"images",
"invariant representations",
"hardest challenges",
"computer vision",
"spatial pooling"
] | https://openreview.net/pdf?id=0OR_OycNMzOF9 | https://openreview.net/forum?id=0OR_OycNMzOF9 | 7N2E7oCO6yPiH | review | 1,362,203,160,000 | 0OR_OycNMzOF9 | [
"everyone"
] | [
"anonymous reviewer 2c2a"
] | ICLR.cc/2013/conference | 2013 | title: review of Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences
review: Many vision algorithms comprise a pooling step, which combines the outputs of a feature extraction layer to create invariance or reduce dimensionality, often by taking their average. This paper proposes to refine this pooling step by 1) not restricting pooling to merely spatial dimensions (so that several different features can be combined), and 2) learning it instead of deciding the structure of the pools beforehand.
This is achieved by replacing the pooling step by a linear transformation of the outputs of the feature extractor (here, an autoencoder), with the constraint that all weights be nonnegative. The main intuition for training is that an invariant representation should not change much between two neighboring frames in a video. Thus, training is conducted by minimizing a cost function that combines a reconstruction error cost and a frame-to-frame dissimilarity cost: the reconstruction error cost ensures that the representation before pooling can be reconstructed from the pooled output without too much discrepancy, and the dissimilarity cost encourages two neighboring frames in a video to have similar pooled representation.
Two experiments are provided: the first one shows that training on patches from natural videos yields pool that combine similar features; the second one tests the algorithm on CIFAR10 and shows that the scheme proposed here performs better than spatial pooling.
Learning pooling weights instead of pre-selecting them is appealing, however this work does not demonstrate the value of the advocated approach.
First, the context given is insufficient; much previous work has explored how to combine different feature maps across feature types rather than only across space, with good results; some of this work is cited here (e.g., ref. 5, Hyvärinen, Hoyer, Inki 2001, ref. 7 Kavukcuoglu et al. 2009), but only briefly mentioned and dismissed because (1) the clusters are required to be fixed manually, (2) clusters are required to have the same size (I am not sure why this paper mentions that, this is not true -- the clusters do have the same size in these papers but it is not a requirement), and (3) there is no 'guarantee that the optimal feature clustering can be mapped into two-dimensional space'. This is true, but the two-dimensional mapping into a topographical map is a bonus, and the same cost functions could be applied with no overlap between the pools, as in the approach advocated here, and still obtain pools that group similar features.
In any case, it is not sufficient to merely state the shortcomings of these previous approaches, without showing that the method here outperforms them and that these supposed shortcomings truly hurt performance.
Another line of work that should definitely be introduced (and isn't), is work enforcing similarity of representations for similar images to train coding. There has been much work on this, even also using video,
e.g. Mobahi, Collobert, Weston, Deep Learning from Temporal Coherence in Video, ICML 2009 -- or before that with collections of still images with continuously varying parameters, Hadsell, Chopra and LeCun, Dimensionality Reduction by Learning an Invariant Mapping (CVPR 2006), and much other work. Those older works use similarity-based losses to train encoding features rather than pooling, but this is not a real difference, which is my second point:
Second, comparing the pooling step here to a simple spatial pooling step is somewhat misleading; the 'auto-pooling step' in this paper is a full-fledged linear mapping, with the added restriction that the weights have to be nonnegative. Thus the system is more akin to a two-layer encoding network than a single-layer network. The distinction between 'coding' and 'pooling' is an artificial one anyways; given that auto-pooling has as many parameters are a standard coding step, it should not only be compared to the much simpler spatial pooling.
In terms of performance, the performance on Cifar 10 is much below what can be obtained with a single layer of features (e.g. compare the 69.7% here to results between 68.6% and 79.6% in Coates et al.'s 'An Analysis of Single-Layer Networks in Unsupervised Feature Learning', and better performance in subsequent papers by Coates et al.), so this is indeed not very convincing.
The ideas combined here (learning a pooling map, using similarity in neighboring frames,
Pros/cons:
- pros: ideas for generalizing pooling are intuitive and appealing
- cons: many of these ideas have been explored elsewhere before, and this paper does not do a suitable job of delineating what the specific contribution is. In fact, it seems that the proposed approach does not have much novelty and most ideas here are already part of existing algorithms; experimental results fail to demonstrate the superiority of the proposed scheme. |