Unnamed: 0 | sentence | aspect_term_1 | aspect_term_2 | aspect_term_3 | aspect_term_4 | aspect_term_5 | aspect_term_6 | aspect_category_1 | aspect_category_2 | aspect_category_3 | aspect_category_4 | aspect_category_5 | aspect_term_1_polarity | aspect_term_2_polarity | aspect_term_3_polarity | aspect_term_4_polarity | aspect_term_5_polarity | aspect_term_6_polarity | aspect_category_1_polarity | aspect_category_2_polarity | aspect_category_3_polarity | aspect_category_4_polarity | aspect_category_5_polarity |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
129 | The writing of the paper is also very unclear, with several repetitions and many typos e.g.: 'we first introduce you a' 'architexture' 'future work remain to' 'it self' I believe there is a lot of potential in the approach(es) presented in the paper.[writing-NEG, typos-NEG], [CLA-NEG] | writing | typos | null | null | null | null | CLA | null | null | null | null | NEG | NEG | null | null | null | null | NEG | null | null | null | null |
130 | In my view a much stronger experimental section together with a clearer presentation and discussion could overcome the lack of theoretical discussion.[experimental section-NEU, theoretical discussion-NEG], [EMP-NEU, SUB-NEG] | experimental section | theoretical discussion | null | null | null | null | EMP | SUB | null | null | null | NEU | NEG | null | null | null | null | NEU | NEG | null | null | null |
134 | Using this setup, the authors are able to beat sequence to sequence baselines on problems that are amenable to such an approach.[setup-POS, baselines-NEU, problems-NEU, approach-NEU], [EMP-POS] | setup | baselines | problems | approach | null | null | EMP | null | null | null | null | POS | NEU | NEU | NEU | null | null | POS | null | null | null | null |
136 | In all three cases, the proposed solution outperforms the baselines on larger problem instances. [proposed solution-POS, baselines-NEU], [EMP-POS] | proposed solution | baselines | null | null | null | null | EMP | null | null | null | null | POS | NEU | null | null | null | null | POS | null | null | null | null |
138 | Quality This is a very clear contribution which elegantly demonstrates the use of extensions of GAN variants in the context of neuroimaging.[contribution-POS], [CLA-POS, IMP-POS] | contribution | null | null | null | null | null | CLA | IMP | null | null | null | POS | null | null | null | null | null | POS | POS | null | null | null |
139 | Clarity The paper is well-written.[paper-POS], [CLA-POS] | paper | null | null | null | null | null | CLA | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
140 | Methods and results are clearly described.[Methods-POS, results-POS], [CLA-POS] | Methods | results | null | null | null | null | CLA | null | null | null | null | POS | POS | null | null | null | null | POS | null | null | null | null |
141 | The authors state significant improvements in classification using generated data.[improvements-POS], [EMP-POS] | improvements | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
142 | These claims should be substantiated with significance tests comparing classification on standard versus augmented datasets.[claims-NEU], [EMP-NEU] | claims | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
143 | Originality This is one of the first uses of GANs in the context of neuroimaging.[null], [NOV-POS] | null | null | null | null | null | null | NOV | null | null | null | null | null | null | null | null | null | null | POS | null | null | null | null |
144 | Significance The approach outlined in this paper may spawn a new research direction.[approach-POS], [IMP-POS] | approach | null | null | null | null | null | IMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
145 | Pros Well-written and original contribution demonstrating the use of GANs in the context of neuroimaging.[contribution-POS], [CLA-POS, NOV-POS] | contribution | null | null | null | null | null | CLA | NOV | null | null | null | POS | null | null | null | null | null | POS | POS | null | null | null |
146 | Cons The focus on neuroimaging might be less relevant to the broader AI community.[null], [IMP-NEG] | null | null | null | null | null | null | IMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
149 | This is a significant topic with implications for quantization for computational efficiency, as well as for exploring the space of learning algorithms for deep networks.[topic-POS], [IMP-POS] | topic | null | null | null | null | null | IMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
150 | While none of the contributions are especially novel,[contributions-NEG], [NOV-NEG] | contributions | null | null | null | null | null | NOV | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
151 | the analysis is clear and well-organized, and the authors do a nice job in connecting their analysis to other work.[analysis-POS], [CLA-POS, PNF-NEG, CMP-POS] | analysis | null | null | null | null | null | CLA | PNF | CMP | null | null | POS | null | null | null | null | null | POS | NEG | POS | null | null |
153 | Overall, the paper is sloppily put together, so it's a little difficult to assess the completeness of the ideas.[paper-NEG, ideas-NEG], [PNF-NEG, CLA-NEG] | paper | ideas | null | null | null | null | PNF | CLA | null | null | null | NEG | NEG | null | null | null | null | NEG | NEG | null | null | null |
154 | The problem being solved is not literally the problem of decreasing the amount of data needed to learn tasks, but a reformulation of the problem that makes it unnecessary to relearn subtasks.[problem-NEG], [EMP-NEG] | problem | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
155 | That's a good idea, but problem reformulation is always hard to justify without returning to a higher level of abstraction to justify that there's a deeper problem that remains unchanged.[idea-NEU], [EMP-NEU] | idea | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
156 | The paper doesn't do a great job of making that connection.[null], [EMP-NEG] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
157 | The idea of using task decomposition to create intrinsic rewards seems really interesting,[idea-POS], [EMP-POS] | idea | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
158 | but does not appear to be explored in any depth.[null], [SUB-NEG] | null | null | null | null | null | null | SUB | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
159 | Are there theorems to be had?[theorems-NEU], [EMP-NEU] | theorems | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
160 | Is there a connection to subtasks rewards in earlier HRL papers?[null], [CMP-NEU] | null | null | null | null | null | null | CMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
161 | The lack of completeness (definitions of tasks and robustness) also makes the paper less impactful than it could be.[paper-NEG], [IMP-NEG, SUB-NEG] | paper | null | null | null | null | null | IMP | SUB | null | null | null | NEG | null | null | null | null | null | NEG | NEG | null | null | null |
162 | Detailed comments: learn hierarchical policies -> learns hierarchical policies?[null], [PNF-NEU] | null | null | null | null | null | null | PNF | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
163 | games Mnih et al. (2015)Silver et al. (2016),: The citations are a mess.[citations-NEG], [CMP-NEG] | citations | null | null | null | null | null | CMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
165 | and is hardly reusable -> and are hardly reusable.[null], [PNF-NEG] | null | null | null | null | null | null | PNF | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
166 | Skill composition is the idea of constructing new skills with existing skills ( -> Skill composition is the idea of constructing new skills out of existing skills (. to synthesis -> to synthesize.[null], [PNF-NEG] | null | null | null | null | null | null | PNF | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
167 | set of skills are -> set of skills is. automatons -> automata.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
168 | with low-level controllers can -> with low-level controllers that can.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
169 | the options policy π_o is followed until β(s) > threshold: I don't think that's how options were originally defined... beta is generally defined as a termination probability.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
170 | The translation from TLTL formula FSA to -> The translation from TLTL formula to FSA?[null], [CLA-NEU] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
171 | four automaton states Qφ {q0, qf, trap}: Is it three or four?[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
172 | learn a policy that satisfy -> learn a policy that satisfies.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
173 | HRL, We introduce the FSA augmented MDP -> HRL, we introduce the FSA augmented MDP..[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
174 | multiple options policy separately -> multiple options policies separately?[null], [CLA-NEU] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
175 | Given flat policies πφ1 and πφ2 that satisfies -> Given flat policies πφ1 and πφ2 that satisfy.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
176 | s illustrated in Figure 3 . -> s illustrated in Figure 2 .?[Figure-NEU], [CLA-NEU] | Figure | null | null | null | null | null | CLA | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
177 | , we cam simply -> , we can simply.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
178 | Figure 4 <newline> . -> Figure 4.[Figure-NEU], [CLA-NEU] | Figure | null | null | null | null | null | CLA | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
179 | . , disagreement emerge -> , disagreements emerge?[null], [CLA-NEU] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
180 | The paper needs to include SOME definition of robustness, even if it just informal.[paper-NEU], [SUB-NEU] | paper | null | null | null | null | null | SUB | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
181 | As it stands, it's not even clear if larger values are better or worse.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
182 | (It would seem that *more* robustness is better than less, but the text says that lower values are chosen.)[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
183 | with 2 hidden layers each of 64 relu: Missing word?Or maybe a comma?[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
184 | to aligns with -> to align with.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
185 | a set of quadratic distance function -> a set of quadratic distance functions.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
186 | satisfies task the specification) -> satisfies the task specification).[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
187 | Figure 4: Tasks 6 and 7 should be defined in the text someplace.[Figure-NEU, Tasks-NEU], [CLA-NEU, SUB-NEU] | Figure | Tasks | null | null | null | null | CLA | SUB | null | null | null | NEU | NEU | null | null | null | null | NEU | NEU | null | null | null |
188 | current frame work i -> current framework i.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
189 | and choose to follow -> and chooses to follow.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
190 | this makes -> making.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
191 | each subpolicies -> each subpolicy. [null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
194 | The method uses a learnable character embedding to transform the data, but is an end-to-end approach[method-NEU, approach-NEU], [EMP-NEU] | method | approach | null | null | null | null | EMP | null | null | null | null | NEU | NEU | null | null | null | null | NEU | null | null | null | null |
195 | . The analysis of squared error for the price regression shows a clear advantage of the method over previous models that used hand crafted features.[method-POS, previous models-POS], [CMP-POS, EMP-POS] | method | previous models | null | null | null | null | CMP | EMP | null | null | null | POS | POS | null | null | null | null | POS | POS | null | null | null |
196 | Here are my concerns: 1) As the price shows a high skewness in Fig. 1, it may make more sense to use relative difference instead of absolute difference of predicted and actual auction price in evaluating/training each model.[Fig-NEU], [EMP-NEU] | Fig | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
197 | That is, making an error of $100 for a plate that is priced $1000 has a huge difference in meaning to that for a plate priced as $10,000.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
198 | 2) The time-series data seems to have a temporal trend which makes retraining beneficial as suggested by authors in section 7.2.[section-POS], [EMP-POS] | section | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
199 | If so, the evaluation setting of dividing data into three *random* sets of training, validation, and test, in 5.3 doesn't seem to be the right and most appropriate choice.[setting-NEG], [EMP-NEG] | setting | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
199 | It should however, be divided into sets corresponding to non-overlapping time intervals to avoid the model use of temporal information in making the prediction.[prediction-NEU], [EMP-NEU] | prediction | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
203 | Experiments are performed on an simple domain which nicely demonstrates its properties, as well as on continuous control problems, where the technique outperforms or is competitive with DDPG.[Experiments-POS], [EMP-POS] | Experiments | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
204 | The paper is very clearly written and easy to read, and its contributions are easy to extract.[paper-POS, contributions-POS], [CLA-POS] | paper | contributions | null | null | null | null | CLA | null | null | null | null | POS | POS | null | null | null | null | POS | null | null | null | null |
205 | The appendix is quite necessary for the understanding of this paper, as all proofs do not fit in the main paper.[appendix-NEU, paper-NEU], [PNF-NEU] | appendix | paper | null | null | null | null | PNF | null | null | null | null | NEU | NEU | null | null | null | null | NEU | null | null | null | null |
206 | The inclusion of proof summaries in the main text would strengthen this aspect of the paper.[summaries-NEU, main text-NEU, paper-NEU], [EMP-NEU] | summaries | main text | paper | null | null | null | EMP | null | null | null | null | NEU | NEU | NEU | null | null | null | NEU | null | null | null | null |
207 | On the negative side, the paper fails to make a strong case for significant impact of this work; the solution to this, of course, is not overselling benefits, but instead having more to say about the approach or finding how to produce much better experimental results than the comparative techniques.[paper-NEG, benefits-NEG, approach-NEG, experimental results-NEG], [SUB-NEU, IMP-NEG] | paper | benefits | approach | experimental results | null | null | SUB | IMP | null | null | null | NEG | NEG | NEG | NEG | null | null | NEU | NEG | null | null | null |
208 | In other words, the slightly more stable optimization and slightly smaller hyperparameter search for this approach is unlikely to result in a large impact.[approach-NEG], [IMP-NEG] | approach | null | null | null | null | null | IMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
209 | Overall, however, I found the paper interesting, readable, and the technique worth thinking about, so I recommend its acceptance.[paper-POS, technique-POS], [CLA-POS, REC-POS, EMP-POS] | paper | technique | null | null | null | null | CLA | REC | EMP | null | null | POS | POS | null | null | null | null | POS | POS | POS | null | null |
214 | The paper seems to have weaknesses pertaining to the approach taken, clarity of presentation and comparison to baselines which mean that the paper does not seem to meet the acceptance threshold for ICLR.[paper-NEG], [APR-NEG, PNF-NEG] | paper | null | null | null | null | null | APR | PNF | null | null | null | NEG | null | null | null | null | null | NEG | NEG | null | null | null |
216 | **Strengths** I like the high-level motivation of the work, that one needs to understand and establish that language or semantics can help learn better representations for images. [motivation-POS], [EMP-POS] | motivation | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
217 | I buy the premise and think the work addresses an important issue.[issue-POS], [IMP-POS] | issue | null | null | null | null | null | IMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
218 | **Weakness** Approach: * A major limitation of the model seems to be that one needs access to both images and attribute vectors at inference time to compute representations which is a highly restrictive assumption (since inference networks are discriminative).[model-NEG], [EMP-NEG] | model | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
221 | Clarity: * Eqn. 5, LHS can be written more clearly as hat{a}_k.[Eqn-NEG], [CLA-NEG] | Eqn | null | null | null | null | null | CLA | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
222 | * It would also be good to cite the following related work, which closely ties into the model of Eslami 2016, and is prior work: Efficient inference in occlusion-aware generative models of images, Jonathan Huang, Kevin Murphy.[related work-NEU], [SUB-NEU] | related work | null | null | null | null | null | SUB | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
225 | This is not natural language, firstly because the language in the dataset is synthetically generated and not "natural".[dataset-NEG], [EMP-NEG] | dataset | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
226 | Secondly, the approach parses this "synthetic" language into structured tuples which makes it even less natural.[approach-NEG], [EMP-NEG] | approach | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
227 | Also, Page. 3. What does "partial descriptions" mean?[Page-NEU], [EMP-NEU] | Page | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
228 | * Section 3: It would be good to explicitly draw out the graphical model for the proposed approach and clarify how it differs from prior work (Eslami, 2016).[proposed approach-NEU], [CMP-NEU] | proposed approach | null | null | null | null | null | CMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
229 | * Sec. 3. 4 mentions that the "only image" encoder is used to obtain the representation for the image, but the "only image" encoder is expected to capture the "indescribable component" from the image, then how is the attribute information from the image captured in this framework?[Sec-NEU], [EMP-NEU] | Sec | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
231 | In general, the writing and presentation of the model seem highly fragmented, and it is not clear what the specifics of the overall model are.[writing-NEG, presentation-NEG, model-NEG], [CLA-NEG, PNF-NEG] | writing | presentation | model | null | null | null | CLA | PNF | null | null | null | NEG | NEG | NEG | null | null | null | NEG | NEG | null | null | null |
232 | For instance, in the decoder, the paper mentions for the first time that there are variables "z", but does not mention in the encoder how the variables "z" were obtained in the first place (Sec. 3.1).[null], [EMP-NEG] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
234 | at every timestep which is used in a similar manner to Eqn. 2 in Eslami, 2016.[null], [CMP-NEG] | null | null | null | null | null | null | CMP | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
235 | Sec. 3.4 "GEN Image Encoder" has some typo, it is not clear what the conditioning is within q(z) term.[null], [CLA-NEG] | null | null | null | null | null | null | CLA | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
237 | This seems like an important baseline to report for the image caption ranking task.[baseline-NEU], [CMP-NEU] | baseline | null | null | null | null | null | CMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
238 | 2. Another crucial baseline is to train the Attend, Infer, Repeat model on the ShapeWorld images, and then take the latent state inferred at every step by that model, and use those features instead of the features described in Sec. 3.4[baseline-NEU], [CMP-NEU] | baseline | null | null | null | null | null | CMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
239 | "Gen Image Encoder" and repeat the rest of the proposed pipeline.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
240 | Does the proposed approach still show gains over Attend Infer Repeat?[proposed approach-NEU], [EMP-NEU] | proposed approach | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
241 | 3. The results shown in Fig. 7 are surprising -- in general, it does not seem like a regular VAE would do so poorly.[results-NEG], [EMP-NEG] | results | null | null | null | null | null | EMP | null | null | null | null | NEG | null | null | null | null | null | NEG | null | null | null | null |
242 | Are the number of parameters in the proposed approach and the baseline VAE similar? [proposed approach-NEU], [EMP-NEU] | proposed approach | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
243 | Are the choices of decoder etc. similar?[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
244 | Did the model used for drawing Fig. 7 converge?[model-NEU], [EMP-NEU] | model | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
245 | Would be good to provide its training curve.[null], [EMP-NEU] | null | null | null | null | null | null | EMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
246 | Also, it would be good to evaluate the AIR model from Eslami, 2016 on the same simple shapes dataset and show unconditional samples.[null], [CMP-NEU] | null | null | null | null | null | null | CMP | null | null | null | null | null | null | null | null | null | null | NEU | null | null | null | null |
247 | If the claim from the work is true, that model should be just as bad as a regular VAE and would clearly establish that using language is helping get better image samples.[model-NEU], [EMP-NEU] | model | null | null | null | null | null | EMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
248 | * Page 2: In general the notion of separating the latent space into content and style, where we have labels for the "content" is an old idea that has appeared in the literature and should be cited accordingly.[literature-NEU], [CMP-NEU] | literature | null | null | null | null | null | CMP | null | null | null | null | NEU | null | null | null | null | null | NEU | null | null | null | null |
260 | The authors compared Dauto with several baseline methods on several datasets and showed improvement.[baseline methods-POS, datasets-POS], [CMP-POS, EMP-POS] | baseline methods | datasets | null | null | null | null | CMP | EMP | null | null | null | POS | POS | null | null | null | null | POS | POS | null | null | null |
261 | The paper is well-organized and easy to follow.[paper-POS], [CLA-POS] | paper | null | null | null | null | null | CLA | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
262 | The probabilistic framework itself is quite straight-forward.[framework-POS], [EMP-POS] | framework | null | null | null | null | null | EMP | null | null | null | null | POS | null | null | null | null | null | POS | null | null | null | null |
263 | The paper will be more interesting if the authors are able to extend the discussion on different forms of prior instead of the simple parameter sharing scheme.[paper-NEU, discussion-NEG], [SUB-NEG] | paper | discussion | null | null | null | null | SUB | null | null | null | null | NEU | NEG | null | null | null | null | NEG | null | null | null | null |
266 | It would be interesting to see if the additional auto-encoder part help address the issue.[null], [SUB-NEG] | null | null | null | null | null | null | SUB | null | null | null | null | null | null | null | null | null | null | NEG | null | null | null | null |
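Each row above pads its aspect and polarity columns with `null` and pairs `aspect_term_i` with `aspect_term_i_polarity` (and likewise for categories) positionally. A minimal sketch of recovering the (aspect, polarity) pairs from one row; `parse_row` is a hypothetical helper, not part of the dataset, and the column offsets assume the 24-column layout shown in the header (id, sentence, 6 terms, 5 categories, 6 term polarities, 5 category polarities):

```python
def parse_row(cells):
    """Pair aspect terms/categories with their polarities, skipping nulls.

    Assumes 24 cells: id, sentence, aspect_term_1..6, aspect_category_1..5,
    aspect_term_1..6_polarity, aspect_category_1..5_polarity.
    """
    terms, categories = cells[2:8], cells[8:13]
    term_pols, cat_pols = cells[13:19], cells[19:24]
    pair = lambda xs, ps: [(x, p) for x, p in zip(xs, ps) if x != "null"]
    return pair(terms, term_pols), pair(categories, cat_pols)

# Row 129 from the table above (sentence abbreviated).
row = ["129", "The writing of the paper is also very unclear ...",
       "writing", "typos", "null", "null", "null", "null",
       "CLA", "null", "null", "null", "null",
       "NEG", "NEG", "null", "null", "null", "null",
       "NEG", "null", "null", "null", "null"]

term_pairs, cat_pairs = parse_row(row)
print(term_pairs)  # [('writing', 'NEG'), ('typos', 'NEG')]
print(cat_pairs)   # [('CLA', 'NEG')]
```

The same positional pairing applies to every row, so a real loader could split each pipe-delimited line, strip whitespace, and feed the resulting cells to a helper like this.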