Unnamed: 0 | sentence | aspect_term_1 | aspect_term_2 | aspect_term_3 | aspect_term_4 | aspect_term_5 | aspect_term_6 | aspect_category_1 | aspect_category_2 | aspect_category_3 | aspect_category_4 | aspect_category_5 | aspect_term_1_polarity | aspect_term_2_polarity | aspect_term_3_polarity | aspect_term_4_polarity | aspect_term_5_polarity | aspect_term_6_polarity | aspect_category_1_polarity | aspect_category_2_polarity | aspect_category_3_polarity | aspect_category_4_polarity | aspect_category_5_polarity |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8,945 | Here, w has almost converged to its optimum w* 1.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
8,946 | Correspondingly, the derivative of C is a small negative value.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
8,947 | However, alpha* is actually 0, and this choice would catapult w far away from w*.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
8,948 | If I haven't made a mistake in my criticisms above, I strongly suggest removing section 3.1 entirely or replacing it with a completely new example that does not suffer from the above issues.[section-NEG, example-NEG, issues-NEG], [EMP-NEG, PNF-NEG] | section | example | issues | null | null | null | EMP | PNF | null | null | null | NEG | NEG | NEG | null | null | null | NEG | NEG | null | null | null |
8,950 | In the VAN initial state (alpha = 0.5), both the residual path and the skip path are multiplied by 0.5 whereas for ResNet, neither is multiplied by 0.5.[null], [EMP-NEG] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
8,951 | Because of this, the experimental results between the two architectures are incomparable.[results-NEG], [CMP-NEG] | results | null | null | null | null | null | CMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
8,953 | I disagree. Let's look at an example. Consider ResNet first.[example-NEG], [EMP-NEG] | example | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
8,954 | It can be written as x + r_1 + r_2 + .. + r_B, where r_b is the value computed by residual block b.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
8,960 | Therefore, there is an open question: are the differences in results between VAN and ResNet in your experiments caused by the removal of skip connections during training or by this scaling?[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
8,961 | Without this information, the experiments have limited value.[information-NEG], [EMP-NEG, SUB-NEG] | information | null | null | null | null | null | EMP | SUB | null | null | null | NEG | null | null | null | null | null | NEG | NEG | null | null | null |
8,963 | If my assessment of the situation is correct, I would like to ask you to repeat your experiments with the following two settings: - ResNet where after each block you multiply the result of the addition by 0.5, i.e. x_{l+1} = 0.5\mathcal{F}(x_l) + 0.5x_l[experiments-NEG], [SUB-NEG] | experiments | null | null | null | null | null | SUB | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
8,967 | +++ writing issues +++ Title: - VARIABLE ACTIVATION NETWORKS: A SIMPLE METHOD TO TRAIN DEEP FEED-FORWARD NETWORKS WITHOUT SKIP-CONNECTIONS This title can be read in two different ways.[title-NEG], [CLA-NEG] | title | null | null | null | null | null | CLA | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
8,973 | In (B), the `without skip-connections' modifies `deep feed-forward networks' and suggests that the network trained has no skip connections. You must mean (B), because (A) is false.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
8,974 | Since it is not clear from reading the title whether (A) or (B) is true, please reword it.[title-NEG], [CLA-NEG] | title | null | null | null | null | null | CLA | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
8,975 | Abstract: - Part of the success of ResNets has been attributed to improvements in the conditioning of the optimization problem (e.g., avoiding vanishing and shattered gradients).[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
8,979 | However, nowhere in your paper do you show that trained VANs have less exploding / vanishing gradients than fully-connected networks trained the old-fashioned way. Again, please reword or include evidence. - where the proposed method is shown to outperform many architectures without skip-connections Again, this sentence makes no sense to me.[proposed method-NEG], [EMP-NEG] | proposed method | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
8,980 | It seems to imply that VAN has skip connections.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
8,981 | But in the abstract you defined VAN as an architecture without skip connections.[abstract-NEG], [EMP-NEG] | abstract | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
8,982 | Please make this more clear.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
8,986 | section 3.1: - replace to to by to in the second line[section-NEG], [CLA-NEG] | section | null | null | null | null | null | CLA | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
8,987 | section 4: - This may be a result of the ensemble nature of ResNets (Veit et al., 2016), which does not play a significant role until the depth of the network increases.[section-NEG, result-NEG], [CLA-NEG] | section | result | null | null | null | null | CLA | null | null | null | null | NEG | NEG | null | null | null | null | NEG | null | null | null | null |
8,988 | The ensemble nature of ResNet is a drawback, not an advantage, because it causes a lack of high-order co-adaptataion of layers.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
8,989 | Therefore, it cannot contribute positively to the performance or ResNet.[performance-NEG], [EMP-NEG] | performance | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
8,990 | As mentioned in earlier comments, please reword / clarify your use of activation function.[comments-NEG], [CLA-NEG] | comments | null | null | null | null | null | CLA | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
8,992 | Change your claim that VAN is equivalent to PReLU.[claim-NEG], [EMP-NEG] | claim | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
8,993 | Please include your description of how your method can be extended to networks which do allow for skip connections.[description-NEG], [SUB-NEU] | description | null | null | null | null | null | SUB | null | null | null | null | NEG | null | null | null | null | null | NEU | null | null | null | null |
8,994 | +++ Hyperparameters +++ Since the initial values of lambda and eta' are new hyperparameters, include the values you chose for them, explain how you arrived at those values and plot the curve of how lambda evolves for at least some of the experiments.[hyperparameters-NEG, values-NEG], [CLA-NEG]] | hyperparameters | values | null | null | null | null | CLA | null | null | null | null | NEG | NEG | null | null | null | null | NEG | null | null | null | null |
8,999 | This is a strong contribution[contribution-POS], [EMP-POS] | contribution | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
9,000 | In Table 2 the difference between inception scores for DCGAN and this approach seems significant to ignore.[Table-NEG, approach-NEG], [SUB-NEG] | Table | approach | null | null | null | null | SUB | null | null | null | null | NEG | NEG | null | null | null | null | NEG | null | null | null | null |
9,001 | The authors should explain more possibly.[null], [SUB-NEG] | null | null | null | null | null | null | SUB | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
9,002 | There is a typo in Page 2 – For all these varaints, -variants.[typo-NEG, Page-NEG], [PNF-NEG]] | typo | Page | null | null | null | null | PNF | null | null | null | null | NEG | NEG | null | null | null | null | NEG | null | null | null | null |
9,006 | While basically the approach seems plausible, the issue is that the result is not compared to ordinary LSTM-based baselines.[result-NEG], [CMP-NEG] | result | null | null | null | null | null | CMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
9,007 | While it is better than a conterpart of MLE (MaskedMLE), whether the result is qualitatively better than ordinary LSTM is still in question.[result-NEU], [CMP-NEU] | result | null | null | null | null | null | CMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,008 | In fact, this is already appearent both from the model architectures and the generated examples: because the model aims to fill-in blanks from the text around (up to that time), generated texts are generally locally valid but not always valid globally. This issue is also pointed out by authors in Appendix A.2.[issue-NEU], [EMP-NEG] | issue | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEG | null | null | null | null |
9,009 | While the idea of using mask is interesting and important, I think if this idea could be implemented in another way, because it resembles Gibbs sampling where each token is sampled from its sorrounding context, while its objective is still global, sentence-wise.[idea-NEU], [EMP-NEU] | idea | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,010 | As argued in Section 1, the ability of obtaining signals token-wise looks beneficial at first, but it will actually break a global validity of syntax and other sentence-wise phenoma.[Section-NEU], [EMP-NEG] | Section | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEG | null | null | null | null |
9,011 | Based on the arguments above, I think this paper is valuable at least conceptually, but doubt if it is actually usable in place of ordinary LSTM (or RNN)-based generation.[paper-NEU], [EMP-NEU] | paper | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,012 | More arguments are desirable for the advantage of this paper, i.e. quantitative evaluation of diversity of generated text as opposed to LSTM-based methods.[arguments-NEU], [SUB-NEU] | arguments | null | null | null | null | null | SUB | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,013 | *Based on the rebuttals and thorough experimental results, I modified the global rating.[rating-NEU], [REC-NEU] | rating | null | null | null | null | null | REC | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,017 | The results seem to show that a delayed application of the regularization parameter leads to improved classification performance.[results-POS, performance-POS], [EMP-POS] | results | performance | null | null | null | null | EMP | null | null | null | null | POS | POS | null | null | null | null | POS | null | null | null | null |
9,018 | The proposed scheme, which delays the application of regularization parameter, seems to be in contrast of the continuation approach used in sparse learning.[proposed scheme-NEU], [EMP-NEU] | proposed scheme | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,020 | One may argue that the continuation approach is applied in the convex optimization case, while the one proposed in this paper is for non-convex optimization. [approach-NEU], [EMP-NEU] | approach | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,021 | It would be interesting to see whether deep networks can benefit from the continuation approach, and the strong regularization parameter may not be an issue because the regularization parameter decreases as the optimization progress goes on.[approach-NEU], [EMP-NEU] | approach | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,022 | One limitation of the work, as pointed by the authors, is that experimental results on big data sets such as ImageNet is not reported.[limitation-NEG, experimental results-NEG], [SUB-NEG, IMP-NEU] | limitation | experimental results | null | null | null | null | SUB | IMP | null | null | null | NEG | NEG | null | null | null | null | NEG | NEU | null | null | null |
9,025 | The main positive point is that the performance does not degrade too much.[performance-NEU], [EMP-POS] | performance | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | POS | null | null | null | null |
9,026 | However, there are several important negative points which should prevent this work, as it is, from being published.[work-NEG], [REC-NEU] | work | null | null | null | null | null | REC | null | null | null | null | NEG | null | null | null | null | null | NEU | null | null | null | null |
9,027 | 1. Why is this type of color channel modification relevant for real life vision?[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
9,028 | The invariance introduced here does not seem to be related to any real world phenomenon. [null], [CMP-NEG] | null | null | null | null | null | null | CMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
9,029 | The nets, in principle, could learn to recognize objects based on shape only, and the shape remains stable when the color channels are changed.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
9,030 | 2. Why is the crash car dataset used in this scenario?[null], [IMP-NEG] | null | null | null | null | null | null | IMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
9,031 | It is not clear to me why this types of theoretical invariance is tested on such as specific dataset.[dataset-NEU], [EMP-NEG] | dataset | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEG | null | null | null | null |
9,032 | Is there a real reason for that?[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
9,033 | 3. The writing could be significantly improved, both at the grammatical level and the level of high level organization and presentation.[writing-NEG], [CLA-NEG, PNF-NEG] | writing | null | null | null | null | null | CLA | PNF | null | null | null | NEG | null | null | null | null | null | NEG | NEG | null | null | null |
9,034 | I think the authors should spend time on better motivating the choice of invariance used, as well as on testing with different (potentially new) architectures, color change cases, and datasets.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
9,035 | 4. There is no theoretical novelty and the empirical one seems to be very limited, with less convincing results.[novelty-NEG, results-NEG], [NOV-NEG, EMP-NEG] | novelty | results | null | null | null | null | NOV | EMP | null | null | null | NEG | NEG | null | null | null | null | NEG | NEG | null | null | null |
9,039 | The paper does not really introduce new methods, and as such, this paper should be seen more as an application paper.[methods-NEG, paper-NEG], [APR-NEG, NOV-NEG] | methods | paper | null | null | null | null | APR | NOV | null | null | null | NEG | NEG | null | null | null | null | NEG | NEG | null | null | null |
9,040 | I think that such a paper could have merits if it would really push the boundary of the feasible, but I do not think that is really the case with this paper: the task still seems quite simplistic, and the empirical evaluation is not convincing (limited analysis, weak baselines).[paper-NEU, task-NEG, empirical evaluation-NEG], [EMP-NEG] | paper | task | empirical evaluation | null | null | null | EMP | null | null | null | null | NEU | NEG | NEG | null | null | null | NEG | null | null | null | null |
9,041 | As such, I do not really see any real grounds for acceptance.[acceptance-NEG], [REC-NEG] | acceptance | null | null | null | null | null | REC | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
9,042 | Finally, there are also many other weaknesses. The paper is quite poorly written in places, has poor formatting (citations are incorrect and half a bibtex entry is inlined), and is highly inadequate in its treatment of related work.[paper-NEG, formatting-NEG, related work-NEG], [CLA-NEG, PNF-NEG] | paper | formatting | related work | null | null | null | CLA | PNF | null | null | null | NEG | NEG | NEG | null | null | null | NEG | NEG | null | null | null |
9,046 | Overall, I see this as a paper which with improvements could make a nice workshop contribution, but not as a paper to be published at a top-tier venue.[paper-NEU, improvements-NEU], [APR-NEG]] | paper | improvements | null | null | null | null | APR | null | null | null | null | NEU | NEU | null | null | null | null | NEG | null | null | null | null |
9,047 | This work fits well into a growing body of research concerning the encoding of network topologies and training of topology via evolution or RL.[work-POS], [IMP-POS] | work | null | null | null | null | null | IMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
9,049 | The biggest two nitpicks: > In our work we pursue an alternative approach: instead of restricting the search space directly, we allow the architectures to have flexible network topologies (arbitrary directed acyclic graphs) This is a gross overstatement.[work-NEU, alternative approach-NEU], [EMP-NEG] | work | alternative approach | null | null | null | null | EMP | null | null | null | null | NEU | NEU | null | null | null | null | NEG | null | null | null | null |
9,050 | The architectures considered in this paper are heavily restricted to be a stack of cells of uniform content interspersed with specifically and manually designed convolution, separable convolution, and pooling layers.[architectures-NEG], [EMP-NEG] | architectures | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
9,052 | The work is still great, but this misleading statement in the beginning of the paper left the rest of the paper with a dishonest aftertaste.[work-POS, statement-NEG, paper-NEG], [EMP-NEG] | work | statement | paper | null | null | null | EMP | null | null | null | null | POS | NEG | NEG | null | null | null | NEG | null | null | null | null |
9,053 | As an exercise to the authors, count the hyperparameters used just to set up the learning problem in this paper and compare them to those used in describing the entire VGG-16 network.[null], [CMP-NEU] | null | null | null | null | null | null | CMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
9,055 | to restrict the search space to reduce complexity and increase efficiency of architecture search.[paper-NEG], [EMP-NEG] | paper | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
9,056 | > Table 1 Why is the second best method on CIFAR ("Hier. repr-n, random search (7000 samples)") never tested on ImageNet?[method-NEG], [SUB-NEG] | method | null | null | null | null | null | SUB | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
9,057 | The omission is conspicuous.[null], [EMP-NEG] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
9,058 | Just test it and report.[null], [SUB-NEG] | null | null | null | null | null | null | SUB | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
9,060 | " "Evolutionary Strategies", at least as used in Salimans 2017, has a specific connotation of estimating and then following a gradient using random perturbations which this paper does not do.[paper-NEG], [SUB-NEG, CMP-NEG] | paper | null | null | null | null | null | SUB | CMP | null | null | null | NEG | null | null | null | null | null | NEG | NEG | null | null | null |
9,061 | It may be more clear to change this phrase to "evolutionary methods" or similar.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
9,063 | A K = 5% tournament does not seem more generic than a binary K = 2 tournament. They're just different.[null], [CMP-NEG]] | null | null | null | null | null | null | CMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
9,070 | Intuitively, one can see why this may be advantageous as one gets some information from the past.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
9,071 | (As an aside, the authors of course acknowledge that recurrent neural networks have been used for this purpose with varying degrees of success.)[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
9,072 | The first question, had a quite an interesting and cute answer. There is a (non-negative) importance weight associated with each state and a collection of states has weight that is simply the product of the weights.[answer-POS], [EMP-POS] | answer | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
9,073 | The authors claim (with some degree of mathematical backing) that sampling a memory of n states where the distribution over the subsets of past states of size n is proportional to the product of the weights is desired. And they give a cute online algorithm for this purpose.[algorithm-NEU], [EMP-NEU] | algorithm | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,075 | . There is no easy way to fix this and for the purpose of sampling the paper simply treats the weights as immutable.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
9,076 | There is also a toy example created to show that this approach works well compared to the RNN based approaches.[approach-NEU], [CMP-NEU] | approach | null | null | null | null | null | CMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,077 | Positives: - An interesting new idea that has potential to be useful in RL[idea-POS], [NOV-POS] | idea | null | null | null | null | null | NOV | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
9,078 | - An elegant algorithm to solve at least part of the problem properly (the rest of course relies on standard SGD methods to train the various networks)[algorithm-POS], [EMP-POS] | algorithm | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
9,079 | Negatives: - The math is fudged around quite a bit with approximations that are not always justified[math-NEG], [EMP-NEG] | math | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
9,080 | - While overall the writing is clear, in some places I feel it could be improved[writing-NEU], [CLA-NEU] | writing | null | null | null | null | null | CLA | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,081 | . I had a very hard time understanding the set-up of the problem in Figure 2.[setup-NEG, Figure-NEG], [PNF-NEG] | setup | Figure | null | null | null | null | PNF | null | null | null | null | NEG | NEG | null | null | null | null | NEG | null | null | null | null |
9,084 | - The experiments only demonstrate the superiority of this method on an example chosen artificially to work well with this approach.[experiments-NEG, approach-NEU], [EMP-NEG] | experiments | approach | null | null | null | null | EMP | null | null | null | null | NEG | NEU | null | null | null | null | NEG | null | null | null | null |
9,087 | My main concerns are on the usage of the given observations.[observations-NEG], [IMP-NEG] | observations | null | null | null | null | null | IMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
9,088 | 1. Can the observations be used to explain more recent works?[observations-NEG, recent works-NEU], [CMP-NEU] | observations | recent works | null | null | null | null | CMP | null | null | null | null | NEG | NEU | null | null | null | null | NEU | null | null | null | null |
9,090 | However, as the authors mentioned, there are more recent works which give better performance than this one.[recent works-NEG, performance-NEG], [CMP-NEG] | recent works | performance | null | null | null | null | CMP | null | null | null | null | NEG | NEG | null | null | null | null | NEG | null | null | null | null |
9,091 | For example, we can use +1, 0, -1 to approximate the weights.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
9,093 | has also shown a carefully designed post-processing binary network can already give very good performance.[performance-NEG], [EMP-NEG] | performance | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
9,094 | So, how can the given observations be used to explain more recent works?[observations-NEG, recent works-NEU], [CMP-NEU] | observations | recent works | null | null | null | null | CMP | null | null | null | null | NEG | NEU | null | null | null | null | NEU | null | null | null | null |
9,095 | 2. How can the given observations be used to improve Courbariaux, Hubara et al. (2016)?[observations-NEU], [EMP-NEU] | observations | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,097 | From this perspective, I wish to see more mathematical analysis rather than just doing experiments and showing some interesting observations.[analysis-NEG, experiments-NEG, observations-NEU], [SUB-NEG] | analysis | experiments | observations | null | null | null | SUB | null | null | null | null | NEG | NEG | NEU | null | null | null | NEG | null | null | null | null |
9,098 | Besides, giving interesting observations is not good enough.[observations-NEG], [SUB-NEG] | observations | null | null | null | null | null | SUB | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
9,099 | I wish to see how they can be used to improve binary networks.[null], [SUB-NEU] | null | null | null | null | null | null | SUB | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
9,104 | Considered paper is one of the first approaches to learn GAN-type generative models.[paper-POS], [NOV-POS] | paper | null | null | null | null | null | NOV | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
9,105 | Using PointNet architecture and latent-space GAN, the authors obtained rather accurate generative model.[model-POS], [EMP-POS] | model | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
9,106 | The paper is well written, results of experiments are convincing, the authors provided the code on the github, realizing their architectures.[paper-POS, results-POS, experiments-POS, architectures-NEU], [CLA-POS, EMP-POS] | paper | results | experiments | architectures | null | null | CLA | EMP | null | null | null | POS | POS | POS | NEU | null | null | POS | POS | null | null | null |
9,107 | Thus I think that the paper should be published.[paper-POS], [REC-POS] | paper | null | null | null | null | null | REC | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
9,113 | There have existed several works which also provide surveys of attribute-aware collaborative filtering.[works-NEU], [CMP-NEU] | works | null | null | null | null | null | CMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
9,114 | Hence, the contribution of this paper is limited, although the authors claim two differences between their work and the existing ones.[contribution-NEG, paper-NEG, work-NEG], [SUB-NEG, CMP-NEG] | contribution | paper | work | null | null | null | SUB | CMP | null | null | null | NEG | NEG | NEG | null | null | null | NEG | NEG | null | null | null |
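Each row in the table above pairs one review sentence with up to six aspect terms, up to five aspect categories, and a polarity (POS/NEU/NEG) for each annotated slot, with unused slots set to null. A minimal sketch of how such a row could be parsed into (aspect, polarity) pairs — the dict layout mirrors the column names in the header, with null columns omitted, and the values are copied from row 8,948; the helper name `aspect_pairs` is illustrative, not part of the dataset:

```python
# One annotated row in the same layout as the table above (null columns omitted);
# values copied from row 8,948.
row = {
    "sentence": ("If I haven't made a mistake in my criticisms above, I strongly "
                 "suggest removing section 3.1 entirely or replacing it with a "
                 "completely new example that does not suffer from the above issues."),
    "aspect_term_1": "section", "aspect_term_1_polarity": "NEG",
    "aspect_term_2": "example", "aspect_term_2_polarity": "NEG",
    "aspect_term_3": "issues",  "aspect_term_3_polarity": "NEG",
    "aspect_category_1": "EMP", "aspect_category_1_polarity": "NEG",
    "aspect_category_2": "PNF", "aspect_category_2_polarity": "NEG",
}

def aspect_pairs(row, prefix, max_n):
    """Collect (value, polarity) tuples for the non-null slots of one row."""
    pairs = []
    for i in range(1, max_n + 1):
        value = row.get(f"{prefix}_{i}")
        polarity = row.get(f"{prefix}_{i}_polarity")
        if value is not None and polarity is not None:
            pairs.append((value, polarity))
    return pairs

terms = aspect_pairs(row, "aspect_term", 6)        # up to 6 term slots
categories = aspect_pairs(row, "aspect_category", 5)  # up to 5 category slots
```

For row 8,948 this yields three negative aspect terms (section, example, issues) and two negative aspect categories (EMP, PNF), matching the bracketed annotation embedded in the sentence column.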