Dataset schema (24 columns; free-text columns list their min–max string lengths, low-cardinality columns list their number of distinct classes):

| Column | Dtype | Values |
|---|---|---|
| Unnamed: 0 | int64 | 2 – 9.3k |
| sentence | string | lengths 30 – 941 |
| aspect_term_1 | string | lengths 1 – 32 |
| aspect_term_2 | string | lengths 2 – 27 |
| aspect_term_3 | string | lengths 2 – 23 |
| aspect_term_4 | string | 25 classes |
| aspect_term_5 | string | 7 classes |
| aspect_term_6 | string | 1 class |
| aspect_category_1 | string | 9 classes |
| aspect_category_2 | string | 9 classes |
| aspect_category_3 | string | 9 classes |
| aspect_category_4 | string | 2 classes |
| aspect_category_5 | string | 1 class |
| aspect_term_1_polarity | string | 3 classes |
| aspect_term_2_polarity | string | 3 classes |
| aspect_term_3_polarity | string | 3 classes |
| aspect_term_4_polarity | string | 3 classes |
| aspect_term_5_polarity | string | 3 classes |
| aspect_term_6_polarity | string | 1 class |
| aspect_category_1_polarity | string | 3 classes |
| aspect_category_2_polarity | string | 3 classes |
| aspect_category_3_polarity | string | 3 classes |
| aspect_category_4_polarity | string | 1 class |
| aspect_category_5_polarity | string | 1 class |
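A minimal loading sketch for this schema, assuming the split ships as a CSV; the file name `review_absa.csv` and the pandas-based access are assumptions for illustration, not part of the original card. The helper gathers the non-null (label, polarity) pairs from the numbered slot columns:

```python
import pandas as pd

# Hypothetical file name -- substitute the actual file for this split.
df = pd.read_csv("review_absa.csv")

def pairs(row: pd.Series, prefix: str, n: int) -> list[tuple[str, str]]:
    """Collect the non-null (label, polarity) pairs from numbered slot columns."""
    out = []
    for i in range(1, n + 1):
        label = row[f"{prefix}_{i}"]
        if pd.notna(label):
            out.append((label, row[f"{prefix}_{i}_polarity"]))
    return out

row = df.iloc[0]
print(row["sentence"])
print("aspect terms:     ", pairs(row, "aspect_term", 6))      # up to 6 slots
print("aspect categories:", pairs(row, "aspect_category", 5))  # up to 5 slots
```

The `Unnamed: 0` column appears to be a leftover pandas row index from the original CSV export; it matches the ID shown with each sample row below.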
Sample rows (empty cells are nulls, and null aspect slots are omitted; the bracketed suffix on each sentence is the inline form of the same annotation):

| ID | Sentence | Aspect terms (polarity) | Aspect categories (polarity) |
|---|---|---|---|
| 8814 | Finally it's better to do some experiments on machine translation or speech recognition and see how the improvement on BLEU or WER could get. [experiments-NEU], [IMP-NEU] | experiments (NEU) | IMP (NEU) |
| 8817 | Regarding the latter methods: what is described in the paper sounds like competent engineering details that those performing such a task for launch in a real service would figure out how to accomplish, and the specific reported details may or may not represent the 'right' way to go about this versus other choices that might be made.[details-NEU], [EMP-NEU] | details (NEU) | EMP (NEU) |
| 8818 | The final threshold for 'successful' speedups feels somewhat arbitrary -- why 16ms in particular? [null], [EMP-NEU] | | EMP (NEU) |
| 8819 | In any case, these methods are useful to document, but derive their value mainly from the fact that they allow the use of the completion/correction methods that are the primary contribution of the paper.[contribution-NEU], [EMP-NEU] | contribution (NEU) | EMP (NEU) |
| 8820 | While the idea of integrating the spelling error probability into the search for completions is a sound one, the specific details of the model being pursued feel very ad hoc, which diminishes the ultimate impact of these results.[idea-NEU], [EMP-NEU] | idea (NEU) | EMP (NEU) |
| 8821 | Specifically, estimating the log probability to be proportional to the number of edits in the Levenshtein distance is really not the right thing to do at all.[null], [EMP-NEG] | | EMP (NEG) |
| 8822 | Under such an approach, the unedited string receives probability one, which doesn't leave much additional probability mass for the other candidates -- not to mention that the number of possible misspellings would require some aggressive normalization. [approach-NEU], [EMP-NEU] | approach (NEU) | EMP (NEU) |
| 8823 | Even under the assumption that a normalized edit probability is not particularly critical (an issue that was not raised at all in the paper, let alone assessed), the fact is that the assumptions of independent errors and a single substitution cost are grossly invalid in natural language.[null], [EMP-NEG] | | EMP (NEG) |
| 8824 | For example, the probability p_1 of 'pkoe' versus p_2 of 'zoze' as likely versions of 'poke' (as, say, the prefix of pokemon, as in your example) should be such that p_1 >>> p_2, not equal as they are in your model.[model-NEG], [EMP-NEG] | model (NEG) | EMP (NEG) |
| 8825 | Probabilistic models of string distance have been common since Ristad and Yianlios in the late 90s, and there are proper probabilistic models that would work with your same dynamic programming algorithm, as well as improved models with some modest state splitting.[models-NEU], [NOV-NEU] | models (NEU) | NOV (NEU) |
| 8826 | And even with very simple assumptions some unsupervised training could be used to yield at least a properly normalized model.[model-NEU], [EMP-NEU] | model (NEU) | EMP (NEU) |
| 8827 | It may very well end up that your very simple model does as well as a well estimated model, but that is something to establish in your paper, not assume.[model-NEG], [EMP-NEG] | model (NEG) | EMP (NEG) |
| 8828 | That such shortcomings are not noted in the paper is troublesome, particularly for a conference like ICLR that is focused on learned models, which this is not. [shortcomings-NEG], [APR-NEG] | shortcomings (NEG) | APR (NEG) |
| 8829 | As the primary contribution of the paper is this method for combining correction with completion, this shortcoming in the paper is pretty serious.[contribution-NEU, shortcoming-NEG], [EMP-NEG] | contribution (NEU), shortcoming (NEG) | EMP (NEG) |
| 8830 | Some other comments: Your presentation of completion cost versus edit cost separation in section 3.3 is not particularly clear, partly since the methods are discussed prior to this point as extension of (possibly corrected) prefixes.[presentation-NEG, section-NEG], [PNF-NEG, EMP-NEG] | presentation (NEG), section (NEG) | PNF (NEG), EMP (NEG) |
| 8831 | In fact, it seems that your completion model also includes extension of words with end point prior to the end of the prefix -- which doesn't match your prior notation, or, frankly, the way in which the experimental results are described.[experimental results-NEG], [EMP-NEG] | experimental results (NEG) | EMP (NEG) |
| 8832 | The notation that you use is a bit sloppy and not everything is introduced in a clear way.[notation-NEG], [PNF-NEG, CLA-NEG] | notation (NEG) | PNF (NEG), CLA (NEG) |
| 8833 | For example, the s_0:m notation is introduced before indicating that s_i would be the symbol in the i_th position (which you use in section 3.3).[notation-NEG], [CLA-NEG] | notation (NEG) | CLA (NEG) |
| 8834 | Also, you claim that s_0 is the empty string, but isn't it more correct to model this symbol as the beginning of string symbol?[null], [EMP-NEG] | | EMP (NEG) |
| 8835 | If not, what is the difference between s_0:m and s_1:m?[null], [EMP-NEU] | | EMP (NEU) |
| 8836 | If s_0 is start of string, the s_0:m is of length m+1 not length m.[null], [EMP-NEU] | | EMP (NEU) |
| 8838 | (you don't need them, but also why number if you never refer to them later?[null], [PNF-NEU] | | PNF (NEU) |
| 8839 | ) Also the dynamic programming for Levenshtein is foundational, not required to present that algorithm in detail, unless there is something specific that you need to point out there (which your section 3.3 modification really doesn't require to make that point).[algorithm-NEG], [SUB-NEG] | algorithm (NEG) | SUB (NEG) |
| 8840 | Is there a specific use scenario for the prefix splitting, other than for the evaluation of unseen prefixes?[null], [EMP-NEU] | | EMP (NEU) |
| 8841 | This doesn't strike me as the most effective way to try to assess the seen/unseen distinction, since, as I understand the procedure, you will end up with very common prefixes alongside less common prefixes in your validation set, which doesn't really correspond to true 'unseen' scenarios.[null], [EMP-NEG] | | EMP (NEG) |
| 8843 | You never explicitly mention what your training loss is in section 5.1.[section-NEG], [CLA-NEG] | section (NEG) | CLA (NEG) |
| 8844 | Overall, while this is an interesting and important problem, and the engineering details are interesting and reasonably well-motivated, the main contribution of the paper is based on a pretty flawed approach to modeling correction probability, which would limit the ultimate applicability of the methods.[problem-POS, main contribution-NEG], [EMP-NEG] | problem (POS), main contribution (NEG) | EMP (NEG) |
| 8850 | The paper is well explained, and it's also nice that the runtime is shown for each of the algorithm blocks.[paper-POS], [CLA-POS, EMP-POS] | paper (POS) | CLA (POS), EMP (POS) |
| 8851 | Could imagine this work giving nice guidelines for others who also want to run query completion using neural networks.[work-POS], [IMP-POS] | work (POS) | IMP (POS) |
| 8852 | The final dataset is also a good size (36M search queries).[dataset-POS], [SUB-POS] | dataset (POS) | SUB (POS) |
| 8853 | My major concerns are perhaps the fit of the paper for ICLR as well as the thoroughness of the final experiments.[experiments-NEU], [APR-NEU] | experiments (NEU) | APR (NEU) |
| 8854 | Much of the paper provides background on LSTMs and edit distance, which granted, are helpful for explaining the ideas.[null], [EMP-POS] | | EMP (POS) |
| 8855 | But much of the realtime completion section is also standard practice, e.g. maintaining previous hidden states and grouping together the different gates.[null], [EMP-NEU] | | EMP (NEU) |
| 8856 | So the paper feels directed to an audience with less background in neural net LMs.[null], [IMP-NEG] | | IMP (NEG) |
| 8857 | Secondly, the experiments could have more thorough/stronger baselines.[experiments-NEU, baselines-NEU], [EMP-NEU, CMP-NEU] | experiments (NEU), baselines (NEU) | EMP (NEU), CMP (NEU) |
| 8858 | I don't really see why we would try stochastic search. And expected to see more analysis of how performance was impacted as the number of errors increased, even if errors were introduced artificially, and expected analysis of how different systems scale with varying amounts of data.[analysis-NEU], [EMP-NEG] | analysis (NEU) | EMP (NEG) |
| 8859 | The fact that 256 hidden dimension worked best while 512 overfit was also surprising, as character language models on datasets such as Penn Treebank with only 1 million words use hidden states far larger than that for 2 layers.[null], [EMP-NEU] | | EMP (NEU) |
| 8864 | The experiments show robustness to these types of noise.[experiments-POS], [EMP-NEU] | experiments (POS) | EMP (NEU) |
| 8865 | Review: The claim made by the paper is overly general, and in my own experience incorrect when considering real-world-noise.[claim-NEG], [EMP-NEG] | claim (NEG) | EMP (NEG) |
| 8866 | This is supported by the literature on data cleaning (partially by the authors), a procedure which is widely acknowledged as critical for good object recognition.[literature-NEU, procedure-NEU], [EMP-NEU] | literature (NEU), procedure (NEU) | EMP (NEU) |
| 8867 | While it is true that some image-independent label noise can be alleviated in some datasets, incorrect labels in real world datasets can substantially harm classification accuracy.[datasets-NEU, accuracy-NEU], [EMP-NEG] | datasets (NEU), accuracy (NEU) | EMP (NEG) |
| 8868 | It would be interesting to understand the source of the difference between the results in this paper and the more common results (where label noise damages recognition quality).[results-NEU], [EMP-NEU, CMP-NEU] | results (NEU) | EMP (NEU), CMP (NEU) |
| 8869 | The paper did not get a chance to test these differences, and I can only raise a few hypotheses.[paper-NEG], [CMP-NEG] | paper (NEG) | CMP (NEG) |
| 8870 | First, real-world noise depends on the image and classes in a more structured way. For instance, raters may confuse one bird species from a similar one, when the bird is photographed from a particular angle.[null], [CLA-NEG] | | CLA (NEG) |
| 8872 | Another possible reason is that classes in MNIST and CIFAR10 are already very distinctive, so are more robust to noise.[null], [EMP-POS] | | EMP (POS) |
| 8873 | Once again, it would be interesting for the paper to study why they achieve robustness to noise while the effect does not hold in general.[paper-NEU], [SUB-NEU] | paper (NEU) | SUB (NEU) |
| 8874 | Without such an analysis, I feel the paper should not be accepted to ICLR because the way it states its claim may mislead readers.[analysis-NEG, paper-NEG], [SUB-NEG, APR-NEG] | analysis (NEG), paper (NEG) | SUB (NEG), APR (NEG) |
| 8875 | Other specific comments: -- Section 3.4 the experimental setup, should clearly state details of the optimization, architecture and hyper parameter search.[Section-NEG, architecture-NEU], [EMP-NEU, CLA-NEG] | Section (NEG), architecture (NEU) | EMP (NEU), CLA (NEG) |
| 8876 | For example, for Conv4, how many channels at each layer?[null], [EMP-NEU] | | EMP (NEU) |
| 8877 | how was the net initialized? [null], [EMP-NEU] | | EMP (NEU) |
| 8878 | which hyper parameters were tuned and with which values?[null], [EMP-NEU] | | EMP (NEU) |
| 8879 | were hyper parameters tuned on a separate validation set?[null], [EMP-NEU] | | EMP (NEU) |
| 8880 | How was the train/val/test split done, etc.[null], [EMP-NEU] | | EMP (NEU) |
| 8882 | -- Section 4, importance of large datasets.[Section-NEU], [EMP-POS] | Section (NEU) | EMP (POS) |
| 8883 | The recent paper by Chen et al (2017) would be relevant here.[null], [SUB-NEU] | | SUB (NEU) |
| 8884 | -- Figure 8 failed to show for me.[Figure-NEG], [PNF-NEG] | Figure (NEG) | PNF (NEG) |
| 8885 | -- Figure 9,10, need to specify which noise model was used. [Figure-NEG, model-NEU], [EMP-NEG] | Figure (NEG), model (NEU) | EMP (NEG) |
| 8890 | Naive multitask learning with deep neural networks fails in many practical cases, as covered in the paper. [paper-NEU], [EMP-NEG] | paper (NEU) | EMP (NEG) |
| 8891 | The one concern I have is perhaps the choice of distinct of Atari games to multitask learn may be almost adversarial, since naive multitask learning struggles in this case; but in practice, the observed interference can appear even with less visually diverse inputs.[null], [EMP-NEG] | | EMP (NEG) |
| 8892 | Although performance is still reduced compared to single task learning in some cases,[performance-NEG], [EMP-NEG] | performance (NEG) | EMP (NEG) |
| 8893 | this paper delivers an important reference point for future work towards achieving generalist agents, which master diverse tasks and represent complementary behaviours compactly at scale.[reference-POS, future work-POS], [IMP-POS] | reference (POS), future work (POS) | IMP (POS) |
| 8894 | I wonder how efficient the approach would be on DM lab tasks, which have much more similar visual inputs, but optimal behaviours are still distinct. [approach-NEU], [IMP-NEU] | approach (NEU) | IMP (NEU) |
| 8900 | ** REVIEW SUMMARY ** The paper reads well, has sufficient reference.[paper-POS], [CLA-POS] | paper (POS) | CLA (POS) |
| 8901 | The idea is simple and well explained.[idea-POS], [EMP-POS] | idea (POS) | EMP (POS) |
| 8902 | Positive empirial results support the proposed regularizer.[empirial results-POS], [EMP-POS] | empirial results (POS) | EMP (POS) |
| 8905 | In related work, I would cite co-training approaches.[related work-NEU], [CMP-NEU, SUB-NEU] | related work (NEU) | CMP (NEU), SUB (NEU) |
| 8906 | In effect, you have two view of a point in time, its past and its future and you force these two views to agree, see (Blum and Mitchell, 1998) or Xu, Chang, Dacheng Tao, and Chao Xu.[null], [CMP-NEU] | | CMP (NEU) |
| 8907 | A survey on multi-view learning. arXiv preprint arXiv:1304.5634 (2013).[null], [CMP-NEU] | | CMP (NEU) |
| 8908 | I would also relate your work to distillation/model compression which tries to get one network to behave like another. On that point, is it important to train the forward and backward network jointly or could the backward network be pre-trained?[work-NEU], [CMP-NEU, EMP-NEU] | work (NEU) | CMP (NEU), EMP (NEU) |
| 8909 | In section 2, it is not obvious to me that the regularizer (4) would not be ignored in absence of regularization on the output matrix.[section-NEU], [EMP-NEG] | section (NEU) | EMP (NEG) |
| 8910 | I mean, the regularizer could push h^b to small norm, compensating with higher norm for the output word embeddings.[null], [EMP-NEU] | | EMP (NEU) |
| 8911 | Could you comment why this would not happen?[null], [EMP-NEU] | | EMP (NEU) |
| 8912 | In Section 4.2, you need to refer to Table 2 in the text.[Section-NEU, Table-NEU, text-NEU], [PNF-NEU] | Section (NEU), Table (NEU), text (NEU) | PNF (NEU) |
| 8913 | You also need to define the evaluation metrics used.[evaluation metrics-NEU], [EMP-NEU] | evaluation metrics (NEU) | EMP (NEU) |
| 8914 | In this section, why are you not reporting the results from the original Show&Tell paper?[section-NEU], [EMP-NEU] | section (NEU) | EMP (NEU) |
| 8915 | How does your implementation compare to the original work?[implementation-NEU], [CMP-NEU] | implementation (NEU) | CMP (NEU) |
| 8916 | On unconditional generation, your hypothesis on uncertainty is interesting and could be tested.[hypothesis-POS], [EMP-POS] | hypothesis (POS) | EMP (POS) |
| 8917 | You could inject uncertainty in the captioning task for instance, e.g. consider that multiple version of each word e.g. dogA, dogB, docC which are alternatively used instead of dog with predefined substitution rates.[null], [EMP-NEU] | | EMP (NEU) |
| 8918 | Would your regularizer still be helpful there?[null], [EMP-NEU] | | EMP (NEU) |
| 8919 | At which point would it break?[null], [EMP-NEU] | | EMP (NEU) |
| 8923 | I think the fact that the authors demonstrate the viability of training VDFFNWSCs that could have, in principle, arbitrary nonlinearities and normalization layers, is somewhat valuable and as such I would generally be inclined towards acceptance,[acceptance-POS], [REC-POS] | acceptance (POS) | REC (POS) |
| 8924 | even though the potential impact of this paper is limited because the training strategy proposed is (by deep learning standards) relatively complicated, requires tuning two additional hyperparameters in the initial value of lambda as well as the step size for updating lambda, and seems to have no significant advantage over just using skip connections throughout training.[potential impact-NEG, strategy-NEG], [IMP-NEG] | potential impact (NEG), strategy (NEG) | IMP (NEG) |
| 8925 | So my rating based on the message of the paper would be 6/10. [rating-NEU], [REC-NEU] | rating (NEU) | REC (NEU) |
| 8927 | As long as those issues remain unresolved, my rating is at is but if those issues were resolved it could go up to a 6.[rating-NEU], [REC-NEU] | rating (NEU) | REC (NEU) |
| 8928 | +++ Section 3.1 problems +++ - I think the toy example presented in section 3.1 is more confusing than it is helpful because the skip connection you introduce in the toy example is different from the skip connection you introduce in VANs.[section-NEG], [EMP-NEG] | section (NEG) | EMP (NEG) |
| 8929 | In the toy example, you add (1 - alpha)wx whereas in the VANs you add (1 - alpha)x.[example-NEG], [EMP-NEG] | example (NEG) | EMP (NEG) |
| 8930 | Therefore, the type of vanishing gradient that is observed when tanh saturates, which you combat in the toy model, is not actually combated at all in the VAN model.[null], [EMP-NEG] | | EMP (NEG) |
| 8931 | While it is true that skip connections combat vanishing gradients in certain situations, your example does not capture how this is achieved in VANs.[example-NEG], [EMP-NEG] | example (NEG) | EMP (NEG) |
| 8932 | - The toy example seems to be an example where Lagrangian relaxation fails, not where it succeeds.[null], [EMP-NEG] | | EMP (NEG) |
| 8933 | Looking at figure 1, it appears that you start out with some alpha < 1 but then immediately alpha converges to 1, i.e. the skip connection is eliminated early in training, because wx is further away from y than tanh(wx).[figure-NEG], [EMP-NEG] | figure (NEG) | EMP (NEG) |
| 8934 | Most of the training takes place without the skip connection.[null], [EMP-NEU] | | EMP (NEU) |
| 8935 | In fact, after 10^4 iterations, training with and without skip connection seem to achieve the same error.[null], [EMP-NEG] | | EMP (NEG) |
| 8936 | It appears that introducing the skip connection was next to useless and the model failed to recognize the usefulness of the skip connection early in training.[null], [EMP-NEG] | | EMP (NEG) |
| 8937 | - Regarding the optimization algorithm involving alpha^* at the end of section 3: It looks to me like a hacky, unprincipled method with no guarantees that just happened to work in the particular example you studied.[section-NEG], [EMP-NEG] | section (NEG) | EMP (NEG) |
| 8938 | You motivate the choice of alpha^* by wanting to maximize the reduction in the local linear approximation to mathcal{C} induced by the update on w.[null], [EMP-NEU] | | EMP (NEU) |
| 8939 | However, this reduction grows to infinity the larger the update is.[null], [EMP-NEU] | | EMP (NEU) |
| 8940 | Does that mean that larger updates are always better?[null], [EMP-NEU] | | EMP (NEU) |
| 8942 | If we wanted to reduce the size of the objective according to the local linear approximation, why wouldn't we choose infinitely large step sizes?[approximation-NEG], [EMP-NEG] | approximation (NEG) | EMP (NEG) |
| 8943 | Hence, the motivation for the algorithm you present is invalid.[motivation-NEG], [EMP-NEG] | motivation (NEG) | EMP (NEG) |
| 8944 | Here is an example where this algorithm fails: consider the point (x,y,w,alpha,lambda) (100, sigma(100), 1.0001, 1, 1).[example-NEG], [EMP-NEG] | example (NEG) | EMP (NEG) |
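Each sentence carries its annotation inline as two bracketed lists appended to the text, `[term-POLARITY, ...], [CATEGORY-POLARITY, ...]`, with `null` marking an empty slot. Below is a rough sketch for parsing that suffix; the convention is inferred from the samples above, so treat it as an assumption rather than a documented format:

```python
import re

# Inferred convention: the sentence ends with "[terms], [categories]",
# each a comma-separated list of LABEL-POLARITY items, or "null" if empty.
TAIL = re.compile(r"\[(?P<terms>[^\]]*)\],\s*\[(?P<cats>[^\]]*)\]\s*$")

def split_pairs(group: str) -> list[tuple[str, str]]:
    """Turn 'label-POL, label-POL' into [(label, POL), ...], skipping nulls."""
    pairs = []
    for item in group.split(","):
        item = item.strip()
        if not item or item == "null":
            continue
        label, _, polarity = item.rpartition("-")  # split on the last hyphen
        pairs.append((label, polarity))
    return pairs

s = ("The final threshold for 'successful' speedups feels somewhat arbitrary "
     "-- why 16ms in particular? [null], [EMP-NEU]")
m = TAIL.search(s)
if m:
    print(split_pairs(m.group("terms")))  # []
    print(split_pairs(m.group("cats")))   # [('EMP', 'NEU')]
```

For multi-slot rows such as 8829 this yields `[('contribution', 'NEU'), ('shortcoming', 'NEG')]` for the term group, matching the aspect_term_1/aspect_term_2 columns above.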