Dataset schema:

column                            type / values
Unnamed: 0                        int64, values 2 – 9.3k
sentence                          string, 30 – 941 chars (review sentence with inline annotations)
aspect_term_1                     string, 1 – 32 chars
aspect_term_2                     string, 2 – 27 chars
aspect_term_3                     string, 2 – 23 chars
aspect_term_4                     string, 25 classes
aspect_term_5                     string, 7 classes
aspect_term_6                     string, 1 class
aspect_category_1 – 3             string, 9 classes each
aspect_category_4                 string, 2 classes
aspect_category_5                 string, 1 class
aspect_term_1 – 5_polarity        string, 3 classes each (POS / NEU / NEG)
aspect_term_6_polarity            string, 1 class
aspect_category_1 – 3_polarity    string, 3 classes each (POS / NEU / NEG)
aspect_category_4 – 5_polarity    string, 1 class each
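Each sentence field ends with inline annotations of the form `[aspect-POLARITY, ...], [CATEGORY-POLARITY, ...]`, where the polarity is POS, NEU, or NEG and `null` marks an empty slot. A minimal sketch of how those trailing annotations could be split back into fields; the `parse_row` helper and its regex are illustrative assumptions, not part of the dataset:

```python
import re

# Trailing annotation block: "[terms], [categories]" at the end of the sentence.
ANNOT = re.compile(r"\[([^\]]*)\],\s*\[([^\]]*)\]\s*$")

def parse_row(sentence):
    """Split a raw sentence into (text, aspect-term pairs, category pairs)."""
    m = ANNOT.search(sentence)
    if m is None:
        return sentence, [], []
    text = sentence[: m.start()].rstrip()

    def split_group(group):
        items = []
        for part in group.split(","):
            part = part.strip()
            if not part or part.lower() == "null":
                items.append((None, None))  # empty annotation slot
            else:
                label, _, pol = part.rpartition("-")  # "related work-POS" -> ("related work", "POS")
                items.append((label, pol))
        return items

    return text, split_group(m.group(1)), split_group(m.group(2))
```

For example, `parse_row("...[experiments-NEG], [SUB-NEG]")` yields the aspect term `("experiments", "NEG")` and the category `("SUB", "NEG")`.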
267: The experiments miss some of the more recent baselines in domain adaptation, such as Adversarial Discriminative Domain Adaptation (Tzeng et al., 2017). [experiments-NEG], [SUB-NEG]
268: It could be more meaningful to organize the pairs in the table by target domain instead of source, for example grouping 9->9, 8->9, 7->9, and 3->9 in the same block. [table-NEU], [PNF-NEG]
269: DAuto does seem to offer more of a boost in domain pairs that are less similar. [null], [EMP-NEU]
271: (1) The topic of this paper seems to have minimal connection with ICLR. [topic-NEG], [APR-NEG]
272: It might be more appropriate for this paper to be reviewed at a control/optimization conference, so that all the technical analysis can be evaluated carefully. [paper-NEU], [APR-NEG]
273: (2) I am not convinced that the main results are novel. [main results-NEG], [NOV-NEG]
274: The convergence of policy gradient does not rely on the convexity of the loss function, which is known in the control and dynamic programming community. [null], [EMP-NEG]
275: The convergence of policy gradient is related to the convergence of actor-critic, which is essentially a form of policy iteration. [null], [EMP-NEU]
276: I am not sure it is a good idea to examine the convergence purely from an optimization perspective. [idea-NEU], [EMP-NEG]
277: (3) The main results of this paper seem technically sound. [main results-POS], [EMP-POS]
278: However, the results seem a bit limited because they do not apply to neural-network function approximators. [results-NEG], [EMP-NEG]
279: They also do not apply to more general control problems beyond quadratic cost functions, which is quite restrictive. [null], [EMP-NEG]
281: I strongly suggest that these results be submitted to a more suitable venue. [results-NEU], [APR-NEG]
288: The experimental results are very good and give strong support for the proposed normalization. [experimental results-POS], [EMP-POS]
289: While the main idea is not new to machine learning (or deep learning), to the best of my knowledge it has not been applied to GANs. [main idea-NEG], [NOV-NEG]
290: The paper is overall well written (though see Comment 3 below); it covers the related work well and includes an insightful discussion about the importance of high-rank models. [paper-POS, related work-POS, discussion-POS, models-POS], [CLA-POS, SUB-POS, CMP-POS]
291: I am recommending acceptance, [null], [REC-POS]
292: though I anticipate seeing a more rounded evaluation of the exact mechanism by which SN improves over the state of the art. [evaluation-NEU], [SUB-NEU]
294: Comments: 1. One concern about this paper is that it doesn't fully answer why this normalization works better. [paper-NEG], [SUB-NEG]
295: I found the discussion about rank to be very intuitive, [discussion-POS], [EMP-POS]
296: however this intuition is not fully tested. [null], [SUB-NEG]
298: The authors claim that other methods, like (Arjovsky et al. 2017), also suffer from the same rank deficiency. [methods-NEU], [EMP-NEU]
301: One way to test the rank hypothesis and better explain this method is to run a couple of truncated-SN experiments. [method-NEU, experiments-NEU], [EMP-NEU]
302: What happens if you run your SN but truncate its spectrum after every iteration to make it comparable to the rank of WN? Do you get comparable inception scores? Or does SN still win? [null], [EMP-NEU]
303: 3. Section 4 needs some careful editing for language and grammar. [Section-NEU, grammar-NEG], [CLA-NEG]
310: Some suggestions / criticisms are given below. 1) The findings seem conceptually similar to the older sparse-coding ideas from the visual cortex. [null], [EMP-NEU]
311: That connection might be worth discussing, because removing the regularizing (i.e., metabolic cost) constraint from your RNNs makes them learn representations that differ from the ones seen in EC. [null], [EMP-POS]
312: The sparse-coding models see something similar: without sparsity constraints, the image representations do not resemble those seen in V1, but with sparsity, the learned representations match V1 quite well. [null], [EMP-POS]
313: That the same observation is made in such disparate brain areas (V1, EC) suggests that sparsity / efficiency might be quite universal constraints on the neural code. [null], [EMP-POS]
314: 2) The finding that regularizing the RNN makes it more closely match the neural code is also foreshadowed somewhat by the 2015 Nature Neuroscience paper by Sussillo et al. [finding-NEU], [CMP-NEU]
319: 3) Why the different initializations for the recurrent weights for the hexagonal vs. other environments? [null], [EMP-NEU]
320: I'm guessing it's because the RNNs don't work in all environments with the same initialization (i.e., they either don't look like EC, or they don't obtain small errors in the navigation task). [null], [EMP-NEU]
321: That seems important to explain more thoroughly than is done in the current text. [null], [EMP-NEU]
322: 4) What happens with ongoing training? [null], [EMP-NEU]
324: With ongoing (continuous) training, does the RNN neurons' spatial tuning remain stable, or does it continue to drift (so that border cells turn into grid cells turn into irregular cells, or some such)? [null], [EMP-NEU]
325: That result could make some predictions for experiment that would be testable with chronic methods (like Ca2+ imaging) that can record from the same neurons over multiple experimental sessions. [result-NEU, experiment-NEU], [EMP-NEU]
326: 5) It would be nice to more quantitatively map out the relation between speed tuning, direction tuning, and spatial tuning (illustrated in Fig. 3). [Fig-NEU], [SUB-NEU]
327: Specifically, I would quantify the cells' direction tuning using the circular-variance methods that people use for studying retinal direction-selective neurons. [null], [EMP-NEU]
328: And I would quantify speed tuning via something like the slope of the firing rate vs. speed curves. [null], [EMP-NEU]
329: And quantify spatial tuning somehow (a natural method would be to use the sparsity measures sometimes applied to neural data to quantify how selective the spatial profile is to one or a few specific locations). [null], [EMP-NEU]
330: Then make scatter plots of these quantities against each other. [null], [EMP-NEU]
331: Basically, I'd love to see the trends for how these types of tuning relate to each other over the whole population: those trends could then be tested against experimental data (possibly in a future study). [null], [EMP-NEU]
338: The reasoning here is that the image feature space may not be semantically organized, so we are not guaranteed that a small perturbation of an image vector will yield image vectors that correspond to semantically similar images (belonging to the same class). [reasoning-NEU], [EMP-NEU]
346: They claim that these augmentation types provide orthogonal benefits and can be combined to yield superior results. [results-NEU], [EMP-NEU]
347: Overall I think this paper addresses an important problem in an interesting way, [paper-POS, problem-NEU], [EMP-POS]
348: but there are a number of ways in which it can be improved, detailed in the comments below. [null], [EMP-NEU]
349: Comments: -- Since the authors are using a pre-trained VGG to embed each image, I'm wondering to what extent they are actually doing one-shot learning here. [null], [EMP-NEU]
350: In other words, the test set of a dataset that is used for evaluation might contain some classes that were also present in the training set that VGG was originally trained on. [dataset-NEU], [EMP-NEU]
352: Can the VGG instead be trained from scratch in an end-to-end way in this model? [model-NEU], [EMP-NEU]
353: -- A number of things were unclear to me with respect to the details of the training process: the feature extractor (VGG) is pre-trained. [training process-NEU], [CLA-NEG]
354: Is this fine-tuned during training? [training-NEU], [EMP-NEU]
355: If so, is this done jointly with the training of the auto-encoder? [training-NEU], [EMP-NEU]
356: Further, is the auto-encoder trained separately or jointly with the training of the one-shot learning classifier? [null], [EMP-NEU]
357: -- While the authors have convinced me that data augmentation indeed significantly improves the performance in the domains considered (based on the results in Table 1 and Figure 5a), [performance-POS, results-POS, Table-NEU, Figure-NEU], [EMP-POS]
358: I am not convinced that augmentation in the proposed manner leads to a greater improvement than just augmenting in the image feature domain. [improvement-NEU], [EMP-NEG]
359: In particular, in Table 2, where the different types of augmentation are compared against each other, we observe similar results between augmenting only in the image feature space and augmenting only in the semantic feature space (i.e., FeatG performs similarly to SemG and to SemN). [Table-NEU, results-NEG], [EMP-NEG]
360: When combining multiple types of augmentation the results are better, [results-POS], [EMP-POS]
362: Specifically, the authors say that for each image they produce 5 additional virtual data points, but when multiple methods are combined, does this mean 5 from each method, or 5 overall? If it's the former, the increased performance may merely be attributed to using more data. [performance-NEU], [EMP-NEU]
364: -- Comparison with existing work: there has been a lot of work recently on one-shot and few-shot learning that would be interesting to compare against. [work-NEU], [CMP-NEU, SUB-NEU]
365: In particular, mini-ImageNet is a commonly used benchmark for this task that this approach could be applied to, for comparison with recent methods that do not use data augmentation. [benchmark-NEU, task-NEU, comparison-NEU], [CMP-NEU, SUB-NEU]
369: -- A suggestion: as future work I would be very interested to see whether this method can be incorporated into common few-shot learning models, to generate on the fly additional training examples from the support set of each episode that these approaches use for training. [future work-NEU, method-NEU, approaches-NEU], [IMP-NEU]
373: I like the presentation and writing of this paper. [presentation-POS, writing-POS], [CLA-POS, PNF-POS]
374: However, I find it difficult to fully evaluate the merit of this paper, mainly because the wide-layer assumption seems somewhat artificial and makes the corresponding results somewhat expected. [results-NEG], [EMP-NEG]
376: This is not surprising. [null], [EMP-NEG]
377: It would be interesting to make the results more quantitative, e.g., to quantify the tradeoff between having local minima and having nonzero training error. [results-NEU], [EMP-NEU]
380: Overall, I feel that the paper is hard to understand and would benefit from more clarity; e.g., section 3.3 states that decoding from the softmax q-distribution is similar to the Bayes decision rule. [paper-NEG, section-NEU], [CLA-NEG, PNF-NEG]
381: Please elaborate on this. [null], [SUB-NEU]
382: Did you compare to minimum Bayes risk decoding, which chooses the output with the lowest expected risk among a set of candidates? [null], [EMP-NEU]
384: However, the methods analyzed in this paper also require sampling (cf. Appendix D.2.4, where you mention a sample size of 10), [methods-NEU], [SUB-NEU, EMP-NEU]
385: Please explain the difference. [difference-NEU], [SUB-NEU, EMP-NEU]
391: An experimental comparison is needed. [experimental comparison-NEU], [CMP-NEU]
392: Cotterell et al., EACL 2017, "Explaining and Generalizing Skip-Gram through Exponential Family Principal Component Analysis": this paper also derives a tensor-factorization-based approach for learning word embeddings for different covariates. [paper-NEU], [EMP-NEU]
394: Due to these two citations, both the novelty of the problem set-up of learning different embeddings for each covariate and the novelty of the tensor-factorization-based model are limited. [citations-NEG, novelty-NEG], [NOV-NEG]
395: The writing is OK. [writing-NEU], [CLA-NEU]
396: I appreciated the set-up of the introduction with the two questions. [setup-POS, introduction-POS], [PNF-POS]
397: However, the questions themselves could have been formulated differently. Q1: the way Q1 is formulated makes it sound like the covariates could be both discrete and continuous, while the method presented later in the paper is only for discrete covariates (i.e., group structure of the data). [questions-NEU], [EMP-NEU]
398: Q2: the authors mention topic alignment without specifying what the topics are aligned to. [null], [EMP-NEG]
399: It would be clearer if they stated explicitly that the alignment is between covariate-specific embeddings. [null], [CLA-NEU]
400: It is also distracting that they call the embedding dimensions topics. [null], [EMP-NEG]
402: In the model section, the paragraphs "Notation and objective function" and "Discussion" are clear. [model section-POS], [CLA-POS]
403: I also liked the idea of having the section "A geometric view of embeddings and tensor decomposition", but that section needs to be improved. [idea-POS, section-NEU], [EMP-POS]
404: For example, the authors describe RandWalk (Arora et al. 2016), but how their work falls into that framework is unclear. [work-NEU], [CMP-NEG]
405: In the third paragraph, starting with "Therefore we consider a natural extension of this model, ...", it is unclear which model the authors are referring to (RandWalk or their tensor factorization?). [model-NEG], [CMP-NEG, CLA-NEG]
406: What are the context vectors in Figure 1? [Figure-NEU], [EMP-NEU]
409: In the last paragraph, beginning with "Note that this is essentially saying...", I don't agree with the argument that the base embeddings decompose into independent topics. [paragraph-NEG], [EMP-NEG]
410: The dimensions of the base embeddings are some kind of latent attributes, and each individual dimension could be used by the model to capture a variety of attributes. [null], [EMP-NEU]
412: Also, the qualitative results in Table 3 do not convince me that the embedding dimensions represent topics. [results-NEG], [EMP-NEG]
415: Hence the apparent semantic coherence in what the authors call topics. [null], [EMP-NEU]
416: The authors present multiple qualitative and quantitative evaluations. [evaluations-POS], [SUB-POS]
417: The clustering by weight (4.1) is nice, and convincing that the model learns something useful. [model-POS], [EMP-POS]
418: Section 4.2, the only quantitative analysis, was missing some details. [quantitative analysis-NEG], [SUB-NEG]
419: Please give references for the evaluation metrics used, for proper credit and so people can look up these tasks. [references-NEG, tasks-NEU], [SUB-NEG]
420: Also, a comparison is needed to fitting GloVe on the entire corpus (without covariates) and to the existing methods of Rudolph et al. 2017 and Cotterell et al. 2017. [comparison-NEU], [CMP-NEU]
422: However, for the covariate-specific analogies (5.3) the authors could also analyze word similarities without the analogy component and probably see similar qualitative results. [qualitative results-NEU], [EMP-NEU]
423: Specifically, they could analyze, for a set of query words, what the most similar words are in the embeddings obtained from different subsections of the data. [analyze-NEU], [EMP-NEU]
425: + the tensor factorization set-up ensures that the embedding dimensions are aligned + clustering by weights (4.1) is useful and seems coherent + covariate-specific analogies are a creative analysis [analysis-POS], [EMP-POS]
426: CONS: - problem set-up not novel and existing approach not cited (experimental comparison needed) [problem setup-NEG], [NOV-NEG]
427: - interpretation of embedding dimensions as topics not convincing [null], [EMP-NEG]
428: - connection to RandWalk (Arora 2016) not stated precisely enough [null], [CLA-NEG]
429: - quantitative results (Table 1): too little detail: * why is this metric appropriate [quantitative results-NEG], [EMP-NEU]
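The category annotations above combine a small set of codes (SUB, EMP, CMP, CLA, NOV, APR, PNF, REC, IMP) with polarities POS/NEU/NEG. A small aggregation sketch over such (category, polarity) pairs; `polarity_profile` is a hypothetical helper, not part of the dataset:

```python
from collections import Counter

def polarity_profile(pairs):
    """Count how often each aspect-category code occurs with each polarity."""
    counts = Counter(pairs)  # e.g. {("EMP", "NEU"): 3, ...}
    profile = {}
    for (cat, pol), n in counts.items():
        profile.setdefault(cat, {})[pol] = n
    return profile

# Toy input mirroring a few of the records above:
profile = polarity_profile([("EMP", "NEU"), ("EMP", "NEG"), ("SUB", "NEG")])
# profile == {"EMP": {"NEU": 1, "NEG": 1}, "SUB": {"NEG": 1}}
```

Summaries like this make it easy to see, for example, how often substance (SUB) comments are negative versus neutral across the corpus.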