Unnamed: 0 | sentence | aspect_term_1 | aspect_term_2 | aspect_term_3 | aspect_term_4 | aspect_term_5 | aspect_term_6 | aspect_category_1 | aspect_category_2 | aspect_category_3 | aspect_category_4 | aspect_category_5 | aspect_term_1_polarity | aspect_term_2_polarity | aspect_term_3_polarity | aspect_term_4_polarity | aspect_term_5_polarity | aspect_term_6_polarity | aspect_category_1_polarity | aspect_category_2_polarity | aspect_category_3_polarity | aspect_category_4_polarity | aspect_category_5_polarity |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
430 | ? * comparison to GloVe on the entire corpus (not covariate specific) * no reference for the metrics used (AP, BLESS, etc.?)[comparison-NEG, reference-NEG], [CMP-NEG] | comparison | reference | null | null | null | null | CMP | null | null | null | null | NEG | NEG | null | null | null | null | NEG | null | null | null | null |
431 | - covariate specific analogies presented confusingly and similar but simpler analysis might be possible by looking at variance in neighbours v_b and v_d without involving v_a and v_c (i.e. don't talk about analogies but about similarities)[null], [EMP-NEU]] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
436 | I'm not sure there is something specific I'm proposing here; I do understand the value of the formulation given in the work, I just find it strange that model-based RL is not mentioned at all in the paper.[work-NEG], [EMP-NEG] | work | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
437 | I think reading the paper, it should be much clearer how the embedding is computed for Atari, and how this choice was made.[paper-NEG], [EMP-NEG] | paper | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
438 | Going through the paper I'm not sure I know how this latent space is constructed.[paper-NEG], [CLA-NEG] | paper | null | null | null | null | null | CLA | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
439 | This however should be quite important.[null], [IMP-NEU] | null | null | null | null | null | null | IMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
440 | The goal function tries to predict states in this latent space.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
441 | So the simpler the structure of this latent space, the easier it should be to train a goal function, and hence quickly adapt to the current reward scheme.[null], [EMP-NEG] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
449 | What hyper-parameters are used?[hyperparameters-NEG], [CLA-NEG] | hyperparameters | null | null | null | null | null | CLA | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
450 | What is the variance between the seeds?[null], [SUB-NEG] | null | null | null | null | null | null | SUB | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
451 | I feel that while the proposed solution is very intuitive, and probably works as described,[proposed solution-POS], [EMP-POS] | proposed solution | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
452 | the paper does not do a great job at properly comparing with the baseline and making sure the results are solid.[paper-NEG, baseline-NEG, results-NEG], [CMP-NEG] | paper | baseline | results | null | null | null | CMP | null | null | null | null | NEG | NEG | NEG | null | null | null | NEG | null | null | null | null |
453 | In particular, looking at Riverraid-new, is the advantage you have there significant?[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
454 | How does the game do on the original task?[task-NEU], [EMP-NEU] | task | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
455 | The plots could also use a bit of help.[plots-NEU], [EMP-NEU] | plots | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
456 | Lines should be thicker.[Lines-NEG], [PNF-NEG] | Lines | null | null | null | null | null | PNF | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
457 | Even when zooming, distinguishing between colors is not easy.[colors-NEG], [PNF-NEG] | colors | null | null | null | null | null | PNF | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
458 | Because there are more than two lines in some plots, it can also hurt people that can't distinguish colors easily.[lines-NEG, colors-NEG], [PNF-NEG]] | lines | colors | null | null | null | null | PNF | null | null | null | null | NEG | NEG | null | null | null | null | NEG | null | null | null | null |
461 | In general, this is an interesting direction to explore, the idea is interesting;[idea-POS], [IMP-POS] | idea | null | null | null | null | null | IMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
462 | however, I would like to see more experiments.[experiments-NEU], [SUB-NEU] | experiments | null | null | null | null | null | SUB | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
465 | 2. The experimental results are fairly weak compared to the other methods that also use many layers.[experimental results-NEG, other methods-NEU], [CMP-NEU] | experimental results | other methods | null | null | null | null | CMP | null | null | null | null | NEG | NEU | null | null | null | null | NEU | null | null | null | null |
466 | For PTB and Text8, the results are comparable to recurrent batchnorm with a similar number of parameters; however, the recurrent batchnorm model has only 1 layer, whereas the proposed architecture has 36 layers.[results-NEU, proposed architecture-NEU], [EMP-NEU] | results | proposed architecture | null | null | null | null | EMP | null | null | null | null | NEU | NEU | null | null | null | null | NEU | null | null | null | null |
467 | 3. It would also be nice to show results on tasks that involve long term dependencies, such as speech modeling.[results-NEU], [EMP-NEU] | results | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
468 | 4. If the authors could test out the new activation function on LSTMs, it would be interesting to perform a comparison between LSTM baseline, LSTM + new activation function, LSTM + recurrent batch norm.[comparison-NEU], [CMP-NEU] | comparison | null | null | null | null | null | CMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
469 | 5. It would be nice to see the gradient flow with the new activation function compared to the ones without.[null], [CMP-NEU] | null | null | null | null | null | null | CMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
470 | 6. The theorems and proofs are rather preliminary; they may not necessarily have to be presented as theorems.[theorems-NEG, proofs-NEG], [PNF-NEU] | theorems | proofs | null | null | null | null | PNF | null | null | null | null | NEG | NEG | null | null | null | null | NEU | null | null | null | null |
474 | The resulting iterative inference framework is applied to a couple of small datasets and shown to produce both faster convergence and a better likelihood estimate.[framework-POS], [EMP-POS] | framework | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
475 | Although probably difficult to understand for someone who is not already familiar with VAE models, I felt that this paper was nonetheless clear and well-presented, with a fair amount of useful background information and context.[paper-POS], [CLA-POS, PNF-POS, CMP-POS] | paper | null | null | null | null | null | CLA | PNF | CMP | null | null | POS | null | null | null | null | null | POS | POS | POS | null | null |
476 | From a novelty standpoint though, the paper is not especially strong given that it represents a fairly straightforward application of (Andrychowicz et al., 2016).[paper-NEG], [NOV-NEG, CMP-NEG] | paper | null | null | null | null | null | NOV | CMP | null | null | null | NEG | null | null | null | null | null | NEG | NEG | null | null | null |
477 | Indeed the paper perhaps anticipates this perspective and preemptively offers that variational inference is a qualitatively different optimization problem than that considered in (Andrychowicz et al., 2016), and also that non-recurrent optimization models are being used for the inference task, unlike prior work.[prior work-NEG], [NOV-NEG, CMP-NEG] | prior work | null | null | null | null | null | NOV | CMP | null | null | null | NEG | null | null | null | null | null | NEG | NEG | null | null | null |
478 | But to me, these are rather minor differentiating factors, since learning-to-learn is a quite general concept already, and the exact model structure is not the key novel ingredient.[model structure-NEG], [NOV-NEG, CMP-NEG] | model structure | null | null | null | null | null | NOV | CMP | null | null | null | NEG | null | null | null | null | null | NEG | NEG | null | null | null |
479 | That being said, the present use for variational inference nonetheless seems like a nice application, and the paper presents some useful insights such as Section 4.1 about approximating posterior gradients.[paper-POS, Section-POS], [EMP-POS] | paper | Section | null | null | null | null | EMP | null | null | null | null | POS | POS | null | null | null | null | POS | null | null | null | null |
481 | While these results are enlightening,[results-POS], [EMP-POS] | results | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
482 | most of the conclusions are not entirely unexpected.[conclusions-NEG], [EMP-NEG] | conclusions | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
483 | For example, given that the model is directly trained with the iterative inference criteria in place, the reconstructions from Fig. 4 seem like exactly what we would anticipate, with the last iteration producing the best result.[model-POS, Fig-POS, respect-POS], [EMP-POS] | model | Fig | respect | null | null | null | EMP | null | null | null | null | POS | POS | POS | null | null | null | POS | null | null | null | null |
485 | And there is no demonstration of reconstruction quality relative to existing models, which could be helpful for evaluating relative performance.[existing models-NEG], [SUB-NEG, CMP-NEG] | existing models | null | null | null | null | null | SUB | CMP | null | null | null | NEG | null | null | null | null | null | NEG | NEG | null | null | null |
487 | In terms of Fig. 5(b) and Table 1, the proposed approach does produce significantly better values of the ELBO criteria; however, is this really an apples-to-apples comparison?[Table-POS, proposed approach-POS], [EMP-POS] | Table | proposed approach | null | null | null | null | EMP | null | null | null | null | POS | POS | null | null | null | null | POS | null | null | null | null |
488 | For example, does the standard VAE have the same number of parameters/degrees-of-freedom as the iterative inference model, or might eq. (4) involve fewer parameters than eq. (5) since there are fewer inputs? Overall, I wonder whether iterative inference is better than standard inference with eq. (4), or whether the recurrent structure from eq. (5) just happens to implicitly create a better neural network architecture for the few examples under consideration.[eq-NEU], [EMP-NEU] | eq | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
489 | In other words, if one plays around with the standard inference architecture a bit, perhaps similar results could be obtained.[results-NEG], [EMP-NEG] | results | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
490 | Other minor comment: * In Fig. 5(a), it seems like the performance of the standard inference model is still improving[performance-POS], [EMP-POS] | performance | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
491 | but the iterative inference model has mostly saturated.[model-NEG], [EMP-NEG] | model | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
492 | * A downside of the iterative inference model not discussed in the paper is that it requires computing gradients of the objective even at test time, whereas the standard VAE model would not.[model-NEG, paper-NEG], [SUB-NEG]] | model | paper | null | null | null | null | SUB | null | null | null | null | NEG | NEG | null | null | null | null | NEG | null | null | null | null |
493 | This paper extends the previous results on differentially private SGD to user-level differentially private recurrent language models.[paper-NEU, previous results-NEU], [EMP-NEU] | paper | previous results | null | null | null | null | EMP | null | null | null | null | NEU | NEU | null | null | null | null | NEU | null | null | null | null |
494 | It experimentally shows that the proposed differentially private LSTM achieves utility comparable to that of the non-private model.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
495 | The idea of training differentially private neural networks is interesting and very important to the machine learning + differential privacy community.[idea-POS], [IMP-POS] | idea | null | null | null | null | null | IMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
496 | This work makes a pretty significant contribution to this topic.[work-POS, contribution-POS], [IMP-POS] | work | contribution | null | null | null | null | IMP | null | null | null | null | POS | POS | null | null | null | null | POS | null | null | null | null |
497 | It adapts techniques from some previous work to address the difficulties in training language models and providing user-level privacy.[techniques-NEU, previous work-NEU], [EMP-NEU] | techniques | previous work | null | null | null | null | EMP | null | null | null | null | NEU | NEU | null | null | null | null | NEU | null | null | null | null |
498 | The experiment shows good privacy and utility.[experiment-POS], [EMP-POS] | experiment | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
499 | The presentation of the paper can be improved a bit.[presentation-NEG], [PNF-NEG] | presentation | null | null | null | null | null | PNF | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
500 | For example, it might be better to have a preliminary section before Section 2 introducing the original differentially private SGD algorithm with clipping, the original FedAvg and FedSGD, and moments accountant as well as privacy amplification; otherwise, it can be pretty difficult for readers who are not familiar with those concepts to fully understand the paper.[section-NEU], [PNF-NEU] | section | null | null | null | null | null | PNF | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
501 | Such introduction can also help readers understand the difficulty of adapting the original algorithms and appreciate the contributions of this work. [introduction-NEU, contributions-NEU], [PNF-NEU] | introduction | contributions | null | null | null | null | PNF | null | null | null | null | NEU | NEU | null | null | null | null | NEU | null | null | null | null |
505 | A nice series of experimental validations demonstrates that the various types of interactions can be detected, while also fairly clarifying the limitations.[experimental validations-POS, limitations-NEU], [EMP-POS] | experimental validations | limitations | null | null | null | null | EMP | null | null | null | null | POS | NEU | null | null | null | null | POS | null | null | null | null |
507 | But given the flexibility of function representations, the use of neural networks would be worth rethinking, and this work would give one clear example. I liked the overall ideas, which are clean and simple, but also found several points still confusing and unclear.[ideas-POS], [EMP-POS, CLA-NEU] | ideas | null | null | null | null | null | EMP | CLA | null | null | null | POS | null | null | null | null | null | POS | NEU | null | null | null |
508 | 1) One of the keys behind this method is the architecture described in 4.1.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
509 | But this part sounds quite heuristic, and it is unclear to me how this can affect facts such as Theorem 4 and Algorithm 1.[Theorem-NEG], [EMP-NEG] | Theorem | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
510 | Absorbing the main effect is not critical to these facts?[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
511 | In a standard sense of statistics, interaction would be something like residuals after removing the main (additive) effect. (like a standard test by a likelihood ratio test for models with vs without interactions)[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
512 | 2) The description of the neural network for the main effect is a bit unclear.[description-NEG], [EMP-NEG] | description | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
513 | For example, what exactly is meant by 'networks with univariate inputs for each input variable'?[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
514 | Is my guess that it is a 1-10-10-10-1 network (in the experiments) correct...?[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
515 | Also, do g_i and g_i' in the GAM model (sec 4.3) correspond to the two networks for the main and interaction effects respectively?[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
516 | 3) mu is finally fixed at min function, and I'm not sure why this is abstracted throughout the manuscript.[manuscript-NEU], [EMP-NEU] | manuscript | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
517 | Is it for considering the requirements for any possible criteria?[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
518 | Pros: - a method for detecting (any order / any form of) statistical interactions by neural networks is provided.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
519 | - nice experimental setup and evaluations with comparisons to relevant baselines by ANOVA, HierLasso, and Additive Groves.[experimental setup-POS], [EMP-POS] | experimental setup | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
520 | Cons: - some parts of the explanations supporting the idea have an unclear relationship to what was actually done, in particular, for how to cancel out the main effect.[explanations-NEG], [EMP-NEG] | explanations | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
521 | - the neural network architecture with L1 regularization is a bit heuristic, and I'm not entirely confident that this architecture can capture only the interaction effect by cancelling out the main effect.[architecture-NEG], [EMP-NEG] | architecture | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
524 | While the idea is sound,[idea-POS], [EMP-POS] | idea | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
525 | many design choices of the system are questionable.[null], [EMP-NEG] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
526 | The problem is particularly aggravated by the poor presentation of the paper, creating countless confusions for readers.[presentation-NEG], [PNF-NEG] | presentation | null | null | null | null | null | PNF | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
527 | I do not recommend the acceptance of this draft.[acceptance-NEG], [REC-NEG] | acceptance | null | null | null | null | null | REC | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
528 | Compared with GAN, traditional graph analytics is model-specific and non-adaptive to training data.[null], [EMP-NEG] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
529 | This is also the case for hierarchical community structures.[null], [EMP-NEG] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
530 | By building the whole architecture on the Louvain method, the proposed method is by no means truly model-agnostic.[architecture-NEU, proposed method-NEG], [EMP-NEG] | architecture | proposed method | null | null | null | null | EMP | null | null | null | null | NEU | NEG | null | null | null | null | NEG | null | null | null | null |
531 | In fact, if the layers are fine enough, a significant portion of the network structure will be captured by the sum-up module instead of the GAN modules, rendering the overall behavior dominated by the community detection algorithm.[null], [EMP-NEG] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
532 | The evaluation remains superficial with minimal quantitative comparisons.[evaluation-NEG], [CMP-NEG, SUB-NEG] | evaluation | null | null | null | null | null | CMP | SUB | null | null | null | NEG | null | null | null | null | null | NEG | NEG | null | null | null |
533 | Treating degree distribution and clustering coefficient (which appears as 'cluster coefficient' in the draft) as global features is problematic.[null], [EMP-NEG] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
535 | The writing of the draft leaves much to be desired.[writing-NEG], [CLA-NEG] | writing | null | null | null | null | null | CLA | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
536 | The description of the architecture is confusing with design choices never clearly explained.[description-NEG, architecture-NEU], [PNF-NEG] | description | architecture | null | null | null | null | PNF | null | null | null | null | NEG | NEU | null | null | null | null | NEG | null | null | null | null |
537 | Multiple concepts need a better introduction, including the very name of their model GTI and the idea of stage identification.[concepts-NEG], [EMP-NEG] | concepts | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
538 | There are also numerous grammatical errors; I suggest the authors seek professional English writing services.[grammatical errors-NEG], [CLA-NEG] | grammatical errors | null | null | null | null | null | CLA | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
541 | Firstly, I suggest the authors rewrite the end of the introduction.[introduction-NEG], [CLA-NEG] | introduction | null | null | null | null | null | CLA | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
542 | The current version tends to mix everything together and makes a misleading claim.[claim-NEG], [CLA-NEG] | claim | null | null | null | null | null | CLA | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
543 | When I read the paper, I thought the speeding-up mechanism could give both a speedup and a performance boost, and lead to the 82.2 F1.[performance-NEU], [EMP-NEU] | performance | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
544 | But it turns out that the above improvements are achieved with at least three different ideas: (1) the CNN+self-attention module; (2) the entire model architecture design; and (3) the data augmentation method.[improvements-NEU], [EMP-NEU] | improvements | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
545 | Secondly, none of the above three ideas are well evaluated in terms of both speedup and RC performance, and I will comment in detail as follows:[performance-NEG], [EMP-NEG] | performance | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
546 | (1) The CNN+self-attention module mainly borrows the idea of (Vaswani et al., 2017a), carrying it over from NMT to RC.[idea-NEU], [NOV-NEU] | idea | null | null | null | null | null | NOV | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
547 | The novelty is limited[novelty-NEG], [NOV-NEG] | novelty | null | null | null | null | null | NOV | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
548 | but it is a good idea to speed up the RC models.[idea-POS], [EMP-POS] | idea | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
549 | However, as the authors hoped to claim that this module could contribute to both speedup and RC performance, it will be necessary to show the RC performance of the same model architecture, but replacing the CNNs with LSTMs.[performance-NEU, model architecture-NEU], [SUB-NEU, EMP-NEU] | performance | model architecture | null | null | null | null | SUB | EMP | null | null | null | NEU | NEU | null | null | null | null | NEU | NEU | null | null | null |
550 | Only if the proposed architecture still gives better results can the claims in the introduction be considered correct.[proposed architecture-NEU, results-NEU, claims-NEU], [EMP-NEU] | proposed architecture | results | claims | null | null | null | EMP | null | null | null | null | NEU | NEU | NEU | null | null | null | NEU | null | null | null | null |
551 | (2) I feel that the model design is the main reason for the good overall RC performance.[model design-NEU, performance-NEU], [EMP-NEU] | model design | performance | null | null | null | null | EMP | null | null | null | null | NEU | NEU | null | null | null | null | NEU | null | null | null | null |
552 | However, in the paper there is no motivation for why the architecture was designed like this.[motivation-NEG, architecture-NEU], [SUB-NEG] | motivation | architecture | null | null | null | null | SUB | null | null | null | null | NEG | NEU | null | null | null | null | NEG | null | null | null | null |
553 | Moreover, the whole model architecture is only evaluated on the SQuAD dataset.[dataset-NEG], [SUB-NEG, EMP-NEG] | dataset | null | null | null | null | null | SUB | EMP | null | null | null | NEG | null | null | null | null | null | NEG | NEG | null | null | null |
554 | As a result, it is not convincing that the system design has good generalization.[system design-NEG], [EMP-NEG] | system design | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
555 | If in (1) it is observed that using LSTMs in the model instead of CNNs could give on par or better results, it will be necessary to test the proposed model architecture on multiple datasets, as well as conducting more ablation tests about the model architecture itself.[results-NEU, proposed model architecture-NEU, datasets-NEU], [EMP-NEG] | results | proposed model architecture | datasets | null | null | null | EMP | null | null | null | null | NEU | NEU | NEU | null | null | null | NEG | null | null | null | null |
556 | (3) I like the idea of data augmentation with paraphrasing.[idea-POS], [EMP-POS] | idea | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
557 | Currently, the improvement is only marginal,[improvement-NEU], [EMP-NEU] | improvement | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
559 | For example, training NMT models with larger parallel corpora; training NMT models with different language pairs with English as the pivot; and better strategies to select the generated passages for data augmentation.[null], [IMP-POS] | null | null | null | null | null | null | IMP | null | null | null | null | null | null | null | null | null | null | POS | null | null | null | null |
560 | I am looking forward to the test performance of this work on SQuAD.[performance-NEU], [SUB-NEU] | performance | null | null | null | null | null | SUB | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
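
Since the rows above follow a fixed 24-column pipe-delimited layout, they can be machine-parsed. Below is a minimal sketch in Python with pandas; the file name `reviews.md` is a placeholder of mine, and it assumes no cell contains a literal `|` character (which holds for the rows shown). It loads the table and splits off the inline `[term-POLARITY, ...], [CATEGORY-POLARITY, ...]` markers that close each sentence:

```python
import re
import pandas as pd

# Fixed column layout, matching the table header above:
# row id, sentence, six aspect-term slots, five aspect-category slots,
# then one polarity slot (NEG/NEU/POS or the literal "null") per slot.
COLUMNS = (
    ["Unnamed: 0", "sentence"]
    + [f"aspect_term_{i}" for i in range(1, 7)]
    + [f"aspect_category_{i}" for i in range(1, 6)]
    + [f"aspect_term_{i}_polarity" for i in range(1, 7)]
    + [f"aspect_category_{i}_polarity" for i in range(1, 6)]
)

def load_rows(path: str) -> pd.DataFrame:
    """Parse the pipe-delimited data rows into a DataFrame.

    The header and the '---|---' separator are skipped by requiring a
    numeric row id in the first cell; "null" cells become missing values.
    """
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            cells = [c.strip() for c in line.rstrip().rstrip("|").split("|")]
            if len(cells) == len(COLUMNS) and cells[0].isdigit():
                rows.append(cells)
    return pd.DataFrame(rows, columns=COLUMNS).replace("null", pd.NA)

# Each sentence ends with inline markers "[term-POL, ...], [CAT-POL, ...]";
# some rows close with "]]", which the optional trailing "\]?" tolerates.
ANNOTATION = re.compile(r"\[([^\[\]]*)\]\s*,\s*\[([^\[\]]*)\]\]?\s*$")

def split_annotation(sentence: str):
    """Return (clean_text, term_labels, category_labels)."""
    m = ANNOTATION.search(sentence)
    if not m:
        return sentence, [], []
    terms = [t.strip() for t in m.group(1).split(",") if t.strip() != "null"]
    cats = [c.strip() for c in m.group(2).split(",")]
    return sentence[: m.start()].rstrip(), terms, cats
```

Applied to row 430, for example, `split_annotation` returns the bare review sentence together with `['comparison-NEG', 'reference-NEG']` and `['CMP-NEG']`; rows that appear to end a review (e.g. 431, 458, and 492) close with `]]` and are handled the same way.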