paper_id: string (length 10–19)
venue: string (15 classes)
focused_review: string (length 7–10.2k)
point: string (length 45–643)
NIPS_2020_1762
NIPS_2020
I have some concerns about the paper: 1. Why is focal loss used in regression tasks? Focal loss is well known for addressing the class-imbalance problem: it has lower gradients on easy samples, which is a good property for classification. But for regressing the IoU, lower weight on easy samples may cause inaccurate estimates. This paper gives me the feeling that the authors only wanted a unified form, but did not consider the difference between the classification and regression tasks. 2. In [1], the predicted variance of the bbox parameters is used for NMS. The algorithm in this paper also produces a bbox confidence (the sum of two neighbouring probabilities). Could it benefit the NMS? 3. The DFL is very similar to the softargmax widely used in keypoint detection, yet citations to this research topic are lacking. Please give some credit to the authors of keypoint-detection papers such as [2] and others. A problem of the softargmax is gradient imbalance: its form is \sum_x p(x) x, so the gradient with respect to p(x) at x=10 is 10 times bigger than the gradient at x=1. This means that the DFL puts more weight on big objects, while the difficult cases in detection are usually the small objects. Overall, the idea of this paper is good but some of the details are still coarse. [1] He Y, Zhang X, Savvides M, et al. Softer-NMS: Rethinking bounding box regression for accurate object detection. arXiv preprint arXiv:1809.08545, 2018. [2] Nibali A, He Z, Morgan S, et al. Numerical coordinate regression with convolutional neural networks. arXiv preprint arXiv:1801.07372, 2018.
1. Why is focal loss used in regression tasks? Focal loss is well known for addressing the class-imbalance problem: it has lower gradients on easy samples, which is a good property for classification. But for regressing the IoU, lower weight on easy samples may cause inaccurate estimates. This paper gives me the feeling that the authors only wanted a unified form, but did not consider the difference between the classification and regression tasks.
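For the gradient-imbalance claim in point 3 of the review above, a minimal numerical sketch (the number of discretized bins below is illustrative, not taken from the paper):

```python
import numpy as np

# Soft-argmax over a discretized support: y_hat = sum_x p(x) * x.
# The gradient of y_hat with respect to p(x) is simply x, so bins at larger
# offsets receive proportionally larger gradients, as the review argues.
support = np.arange(17).astype(float)   # illustrative number of bins
logits = np.zeros_like(support)
p = np.exp(logits) / np.exp(logits).sum()

y_hat = float((p * support).sum())      # the soft-argmax prediction
grad_wrt_p = support                    # d y_hat / d p(x) = x
print(grad_wrt_p[10] / grad_wrt_p[1])   # -> 10.0, matching the review's example
```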
NIPS_2021_311
NIPS_2021
- The paper leaves some natural questions open (see questions below). - Line 170 mentions that the corpus residual can be used to detect an unsuitable corpus, but there are no experiments to support this. After authors' response All the weakness points have been addressed by the authors' response. Consequently I have raised my score. In particular: The left open questions have all been answered. There indeed is an experiment to support this, thanks to the authors' for clarifying this, that connection was not clear to me previously. Questions - Line 60: Why do you say that e.g. influence functions cannot be used to explain a prediction? The explanation of a prediction could be the training examples whose removal (as determined by the influence function) would lead to the largest score drop for a prediction. - How does the method scale as the corpus size or hidden dimension size is increased? - What happens if a too small corpus is chosen? Can this be detected? - What if we don’t know that a test example is crucially different, e.g. what if we don’t know that the patient of Figure 8 is “British” and we use the American corpus to explain it? Can this be detected with the corpus residual value? - In the supplementary material you mention how it is possible to check if a decomposition is unique. Do you do this in practice when conducting experiments? How do you choose a decomposition if it is not unique? What does it imply for the experiments (and the usage of the method in real-world applications) if the decomposition is not unique? Typos, representation etc. - Line 50: An example of when a prototype model would be unsuitable would strengthen your argument. - Footnote 2: “or” -> “of” - Line 191: when the baseline is first introduced, [10] or other references would be helpful to support this approach - Line 319: “the the” -> “the” - Line 380: “at” -> “to”? A broader impact section could be added. In a separate section (e.g. supplementary material), there could be an explicit discussion on when the method should not be used, e.g. as shown in Figure 8, the American corpus shouldn’t be used to explain the British patient. Also see last question above – what if we don’t know that the patient is British? Can this be detected? This should also be discussed in such a section.
- How does the method scale as the corpus size or hidden dimension size is increased?
NIPS_2017_486
NIPS_2017
1. The paper is motivated with using natural language feedback just as humans would provide while teaching a child. However, in addition to natural language feedback, the proposed feedback network also uses three additional pieces of information – which phrase is incorrect, what is the correct phrase, and what is the type of the mistake. Using these additional pieces is more than just natural language feedback. So I would like the authors to be clearer about this in introduction. 2. The improvements of the proposed model over the RL without feedback model is not so high (row3 vs. row4 in table 6), in fact a bit worse for BLEU-1. So, I would like the authors to verify if the improvements are statistically significant. 3. How much does the information about incorrect phrase / corrected phrase and the information about the type of the mistake help the feedback network? What is the performance without each of these two types of information and what is the performance with just the natural language feedback? 4. In figure 1 caption, the paper mentions that in training the feedback network, along with the natural language feedback sentence, the phrase marked as incorrect by the annotator and the corrected phrase is also used. However, from equations 1-4, it is not clear where the information about incorrect phrase and corrected phrase is used. Also L175 and L176 are not clear. What do the authors mean by “as an example”? 5. L216-217: What is the rationale behind using cross entropy for first (P – floor(t/m)) phrases? How is the performance when using reinforcement algorithm for all phrases? 6. L222: Why is the official test set of MSCOCO not used for reporting results? 7. FBN results (table 5): can authors please throw light on why the performance degrades when using the additional information about missing/wrong/redundant? 8. Table 6: can authors please clarify why the MLEC accuracy using ROUGE-L is so low? Is that a typo? 9. Can authors discuss the failure cases of the proposed (RLF) network in order to guide future research? 10. Other errors/typos: a. L190: complete -> completed b. L201, “We use either … feedback collection”: incorrect phrasing c. L218: multiply -> multiple d. L235: drop “by” Post-rebuttal comments: I agree that proper evaluation is critical. Hence I would like the authors to verify that the baseline results [33] are comparable and the proposed model is adding on top of that. So, I would like to change my rating to marginally below acceptance threshold.
2. The improvements of the proposed model over the RL-without-feedback model are not so high (row 3 vs. row 4 in Table 6), and in fact a bit worse for BLEU-1. So I would like the authors to verify whether the improvements are statistically significant.
NIPS_2018_288
NIPS_2018
Given below is a list of remarks regarding these weaknesses and requests for clarifications and updates to the manuscript. - The algorithm’s O(1/(\epsilon^3 (1-\gamma)^7)) complexity is extremely high. Of course, this is not practical. Notice that, as opposed to the nice recovery time O(\epsilon^{-(d+3)}) result, which is almost tight, the above complexity stems from the algorithm’s design. - Part of the intractability of the algorithm comes from the requirement of full coverage of all ball-action pairs, per each iteration. This issue is magnified by the fact that the NN effective distance, h^*, is O(\epsilon (1-\gamma)). This implies a huge discretized state set, which adds up to the above problematic complexity. The authors mention (though vaguely) that the analysis is probably loose. I wonder how much of the complexity issues originate from the analysis itself, and how much from the algorithm’s design. - In continuation to the above remark, what do you think can be done (i.e. what minimal assumptions are needed) to relax the need of visiting all ball-action pairs with each iteration? Alternatively, what would happen if you partially cover them? - Table 1 lacks two recent works [1,2] (see below) that analyze the sample complexity of parametrized TD-learning algorithms and that have all ‘yes’ values in the columns except for the ‘single sample path’ column. Please update accordingly. - From personal experience, I believe the Lipschitz assumption is crucial to have any guarantees. Also, this is a non-trivial assumption. Please stress it further in the introduction, and/or perhaps in the abstract itself. - There is another work [3] that should also definitely be cited. Please also explain how your work differs from it. - It is written in the abstract and in at least one more location that your O(\epsilon^{-(d+3)}) complexity is tight, but you mention a lower bound which differs by a factor of \epsilon^{-1}. So this is not really tight, right? If so, please rephrase. - p.5, l.181: ‘of’ is written twice. p.7, l.267: ‘the’ is written twice. References: [1] Finite Sample Analyses for TD(0) with Function Approximation, G. Dalal, B. Szörényi, G. Thoppe, S. Mannor, AAAI 2018. [2] Finite Sample Analysis of Two-Timescale Stochastic Approximation with Applications to Reinforcement Learning, G. Dalal, B. Szörényi, G. Thoppe, S. Mannor, COLT 2018. [3] Batch Mode Reinforcement Learning based on the Synthesis of Artificial Trajectories, Raphael Fonteneau, Susan A. Murphy, Louis Wehenkel, and Damien Ernst, Annals of Operations Research 2013.
- In continuation to the above remark, what do you think can be done (i.e. what minimal assumptions are needed) to relax the need of visiting all ball-action pairs with each iteration? Alternatively, what would happen if you partially cover them?
ARR_2022_237_review
ARR_2022
Weaknesses of the paper include: - The introduction of relation embeddings for relation extraction is not new; for example, look at all the knowledge graph completion approaches that explicitly model relation embeddings, or works on distantly supervised relation extraction. However, an interesting experiment would be to show the impact that such embeddings can have by comparing with a simple baseline that does not take advantage of them. - Improvements are incremental across datasets, with the exception of WebNLG. Why are mean and standard deviation not shown for the test set of DocRED? - It is not clear if the benefit of the method is just performance-wise. Could this particular alignment of entity and relation embeddings (which gives the most in performance) offer some interpretability? (Perhaps this could be shown with a t-SNE plot, i.e. check that their embeddings are close in space.) Comments/Suggestions: - Lines 26-27: Multiple entities typically exist in both sentences and documents, and this is the case even for relation classification, not only document-level RE or joint entity and relation extraction. - Lines 39-42: Point to Figure 1 for this particular example. - Lines 97-98: Rephrase the sentence "one that searches for ... objects" as it is currently confusing. - Line 181, Equation 4: $H^s$, $E^s$, $E^o$, etc. are never explained. - Could you show ablations on EPO and SEO? You mention in the Appendix that the proposed method is able to solve all those cases, but you don't show if your method is better than others. - It would be interesting to also show how the method performs when different numbers of triples reside in the input sequence. Would the method help more on sequences with more triples? Questions: - Would improvement still be observed with a better encoder, e.g. RoBERTa-base, instead of BERT? - How many seeds did you use to report mean and stdev on the development set? - For DocRED, did you consider the documents as an entire sentence? How do you deal with concepts (multiple entity mentions referring to the same entity)? This information is currently missing from the manuscript.
- Would improvement still be observed with a better encoder, e.g. RoBERTa-base, instead of BERT?
pHwLbEkB0J
EMNLP_2023
Overall I feel the paper is good -- clearly written, and the proposed method is intuitive. A few places that can be further improved include: 1. More datasets on traditional multilingual tasks like XNLI, XTREME, to show the proposed technique can generalize to tasks with different levels of reasoning requirements. 2. Consider adding one experiment on an open-source LLM, as the current GPT series and PaLM v1 are somewhat hard to reproduce entirely for outsiders. 3. Small typo around line 90 -- "Let’s resolver the task ..." to "Let's resolve the task".
1. More datasets on traditional multilingual tasks like XNLI, XTREME, to show the proposed technique can generalize to tasks with different levels of reasoning requirements.
YHqEWF5gt8
ICLR_2024
1) The choice of the baseline methods can be improved. Especially to evaluate the appearance decomposition part, it would be good to compare to other existing methods, as an example Ref-NeRF would be a good baseline that contains appearance decomposition. For the larger outdoor scene, MipNerf would be a good baseline. 2) More details on the training, the data and the results of the ray rectification transformer should be provided including results on the ray density profile. I’d suggest adding information about the training data, an example of successful ray clean on ray density plots as well as final rendered images with and without ray cleaning. As a reviewer it is hard to judge the impact of this part with the given information, please provide more evidence. 3) Figure 6 caption does not fit the content. 4) It would be good to have a quantitative evaluation for the Shiny Objects dataset to support the appearance decomposition.
1) The choice of the baseline methods can be improved. Especially to evaluate the appearance decomposition part, it would be good to compare to other existing methods, as an example Ref-NeRF would be a good baseline that contains appearance decomposition. For the larger outdoor scene, MipNerf would be a good baseline.
NIPS_2019_306
NIPS_2019
Weakness: 1. There are many limitations of the proposed method. The proposed method assumes that the causal graph is given. Also, the values must be discrete. 2. It would be good to show how to use the proposed method to achieve fair policy learning without "severely damaging the performance of predictive model". 3. It would be great to discuss why the fairness bound achieved by the proposed method is tighter compared with previous methods. Minor issues: line 17: irrespective their -> irrespective of their; line 240: to find -> to finding. Should the numbers in Table 3, CE # of o 4, be bold? The bound of the proposed method is tighter than previous methods.
2. It would be good to show how to use the proposed method to achieve fair policy learning without "severely damaging the performance of predictive model".
ARR_2022_291_review
ARR_2022
- The biggest concerns/confusions during reading stem from the lack of implementation details for the proposed methods; they should have been described in the implementation details in Section 4.1. 1) For the interpolation method, how was \lambda set? 2) For the dropout, through the reading of the response letter, my understanding is that multiple stochastic masks (with 0s and 1s) are applied to a document representation from the encoder. Herein, what is the dropping rate? How many masks have been generated? - I am not sure TQA is a good enough benchmark for the proposed method. As can be seen from Table 1, none of the competing deep models could outperform BM25. Though the proposed DAR could boost DPR, I am not sure if such a gain is meaningful. - The proposed methods are novel and intriguing. The confusion mostly comes from the implementation details and the results. Why not extend the effort to a full paper and properly add all of the details, given the extra space?
- The biggest concerns/confusions during reading stem from the lack of implementation details for the proposed methods; they should have been described in the implementation details in Section 4.1.
NIPS_2020_639
NIPS_2020
The relevance of this paper is entirely unclear, for multiple reasons: 1. The authors themselves state "This work does not present any foreseeable societal consequence.", raising the question of why we should care about this work in the first place. 2. They don't make any detectable effort towards arguing for why their work is relevant in the paper either, rendering it a purely theoretical exercise. 3. No empirical evaluation whatsoever is provided, there is no comparison (except for on an abstract level) with other methods. It is completely unclear what the practical value of the contribution even could be. Even a theoretical paper should at least try to argue for why it matters, this is not the case with this submission. The theoretical contributions may well be significant and valuable, however, in its current form this paper is not suitable for a publication at NeurIPS.
3. No empirical evaluation whatsoever is provided, there is no comparison (except for on an abstract level) with other methods. It is completely unclear what the practical value of the contribution even could be. Even a theoretical paper should at least try to argue for why it matters, this is not the case with this submission. The theoretical contributions may well be significant and valuable, however, in its current form this paper is not suitable for a publication at NeurIPS.
NIPS_2021_1759
NIPS_2021
The extension from the EH model is natural. In addition, there has been literature that proves the power of FNNs from a theoretical point of view, whereas this paper fails to review this line of work. Among other works, Schmidt-Hieber (2020) gave an exact upper bound on the approximation error for FNNs involving the least-squares loss. Since the DeepEH optimizes a likelihood-based loss, this paper builds up its asymptotic properties by following the assumptions and proofs of Theorems 1 and 2 in Schmidt-Hieber (2020), as well as theories on empirical processes. Additional Feedback: 1) In the manuscript, P mostly represents a probability but sometimes a cumulative distribution function (e.g., Eqs. (3) and (4) and L44, all in the Appendix), which leads to confusion. 2) The notation K is abused too: it is used both for a known kernel function (e.g., L166) and for the number of layers (e.g., L176). 3) What is K_b in estimating the baseline hazard (L172)?
1) In the manuscript, P mostly represents a probability but sometimes a cumulative distribution function (e.g., Eqs. (3) and (4) and L44, all in the Appendix), which leads to confusion.
ICLR_2023_3449
ICLR_2023
1. The spurious features in Sections 3.1 and 3.2 are very similar to backdoor triggers. They both are artificial patterns that only appear a few times in the training set. For example, Chen et al. (2017) use random noise patterns. Gu et al. (2019) [1] use single-pixel and simple patterns as triggers. It is well known that a few training examples with such triggers (rare spurious examples in this paper) would have a large impact on the trained model. 2. How neural nets learn natural rare spurious correlations is unknown to the community (to the best of my knowledge). However, most of the analysis and ablation studies use the artificial patterns instead of natural spurious correlations. Duplicating the same artificial pattern multiple times is different from natural spurious features, which are complex and different in every example. 3. What's the experiment setup in Section 3.3 (data augmentation methods, learning rate, etc.)? [1]: BadNets: Evaluating Backdooring Attacks on Deep Neural Networks. https://messlab.moyix.net/papers/badnets_ieeeaccess19.pdf
2. How neural nets learn natural rare spurious correlations is unknown to the community (to the best of my knowledge). However, most of the analysis and ablation studies use the artificial patterns instead of natural spurious correlations. Duplicating the same artificial pattern multiple times is different from natural spurious features, which are complex and different in every example.
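To make the artificial-pattern setup discussed above concrete, a minimal sketch of stamping a simple patch trigger into a few training images (a generic backdoor-style pattern, not the paper's exact protocol; array shapes and values are illustrative):

```python
import numpy as np

def add_patch_trigger(images, indices, patch_value=1.0, size=3):
    """Stamp a small square pattern into the corner of selected images,
    mimicking the single-pixel/simple-patch triggers (Gu et al., 2019) and the
    duplicated 'rare spurious' patterns discussed in the review."""
    out = images.copy()
    out[indices, :size, :size] = patch_value  # assumes (N, H, W) grayscale
    return out

# Example: mark 5 of 1000 toy images with the trigger.
imgs = np.random.rand(1000, 32, 32)
poisoned = add_patch_trigger(imgs, indices=np.arange(5))
```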
NIPS_2019_1366
NIPS_2019
Weakness: - Although the method discussed in the paper can be applied to general MDPs, the paper is limited to navigation problems. Combining RL and planning has already been discussed in PRM-RL [1]. It would be interesting to see whether we can apply such algorithms to more general tasks. - The paper has shown that the pure RL algorithm (HER) failed to generalize to distant goals, but the paper doesn't discuss why it failed and why planning can solve the problem that HER can't solve. Ideally, if the neural networks are large enough and are trained for enough time, Q-learning should converge to a not-so-bad policy. It would be better if the authors could discuss the advantages of planning over pure Q-learning. - The time complexity will be too high if the replay buffer is too large. [1] PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning
- Although the method discussed in the paper can be applied to general MDPs, the paper is limited to navigation problems. Combining RL and planning has already been discussed in PRM-RL [1]. It would be interesting to see whether we can apply such algorithms to more general tasks.
NIPS_2018_884
NIPS_2018
. It's hard to judge impact in real-world settings when most of the quantitative evaluations are on datasets not representative of complex natural images (e.g. MNIST and NORB). On MNIST, the method shows clear advantages over competing methods. However, even on NORB, where a lot of the deformations can't easily be parameterized, this advantage has turned into being only on par with other leading methods. I think the inclusion of the faces dataset was important for this reason. I was confused for a while what the exact orbit was for each dataset. I kept scanning the text for this. A table of all three datasets and a short note on how orbits were defined and canonical samples selected would make things a lot clearer. Concurrent work. Similar ideas of representation learning through transformation priors have appeared in recent work. I don't think it takes away any novelty from this submission, since judging from the dates these is concurrent works. I just thought I would bring your attention to it: - https://openreview.net/pdf?id=S1v4N2l0- (ICLR 2018) - https://arxiv.org/pdf/1804.01552.pdf (CVPR 2018) Minor comments. - eq. 6: what connects the orbit with this loss? I don't see the connection just yet - eq. 7: "x_q not in Oxq!=Oxi" What is this notation "set1 != set2" that seems to imply it forms another set (and not a true/false value) line 136: Oxq \not= Oxi, again, I'm not sure about this notation. I understand what it means, but it looks odd to me. I have never seen this as part of set notation before. - eq. 8: where is x_p, x_q, x_c coming from? Shouldn't the summand be $(x_i, x_p, x_q) \in \mathcal{T}$? The canonical sample x_c is still unclear where it comes from. If x_c is the canonical instance for each orbit, then it also changes in the summation. This is not clear from the notation. - line 196: max unpooling transfers the argmax knowledge of maxpooling to the decoder. Do you use this behavior too? - Table 1: Should EX/NORB really be bold-faced? Is the diff between 0.59+/-0.12 really statistically significant from 0.58+/-0.11? - line 213: are all feature spaces well-suited for 1-NN? If a feature space is not close to a spherical Gaussian, it may perform poorly. If feature dimensions are individually standardized, it would avoid this issue. - It was a bit unclear how canonical samples were constructed on the face dataset ("least yaw displacement from a frontal pose"). This seems to require a lot of priors on faces and does not seem like purely unsupervised learning. Did the other competing methods require canonical examples to be designated?
- line 213: are all feature spaces well-suited for 1-NN? If a feature space is not close to a spherical Gaussian, it may perform poorly. If feature dimensions are individually standardized, it would avoid this issue.
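A minimal sketch of the per-dimension standardization suggested in the last point above, applied before 1-NN retrieval (the array names are hypothetical placeholders for learned embeddings, not from the paper):

```python
import numpy as np

def one_nn_predict(train_feats, train_labels, query_feats, standardize=True):
    """1-NN classification; optionally z-score each feature dimension using
    statistics of the training features, as the review suggests, so that no
    single dimension dominates the Euclidean distance."""
    if standardize:
        mu = train_feats.mean(axis=0, keepdims=True)
        sd = train_feats.std(axis=0, keepdims=True) + 1e-8
        train_feats = (train_feats - mu) / sd
        query_feats = (query_feats - mu) / sd
    # Squared Euclidean distance from each query to each training point.
    d2 = ((query_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
    return train_labels[d2.argmin(axis=1)]
```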
wE8wJXgI9T
ICLR_2025
- **Unclear Definition**: The proposed *contrastive gap* lies at the core of this work; however, it has never been defined clearly. While an intuitive example on the "idealized" dataset was given to demonstrate the concept, the setting of this example is less convincing (see the detailed comments below), and a clear, formal definition for the contrastive gap is still lacking. - **Contrastive Gap v.s. Margin**: The experiment settings of `Table 1` and `Section 3.2` are unconvincing in investigating the modality gap and somewhat confused with the margin in a single image modality. As shown in Eq (2), the CLIP contrastive loss may work as a triplet loss to push the margin between positive and negative pairs. Thus, by using the same modality on both encoders, the learned contrastive gap is more like a margin among different samples instead of showing the modality gap. - **Lack of Technical Novelty**: The key contribution of this work is relatively marginal. First, the proposed contrastive gap lacks insightful theoretical evidence and guarantees. Second, the proposed mitigation strategies all build on top of existing works. - **Experiments**: While the new proposed fine-tuning loss shows some improvements, the CLIP embeddings badly degrade in retrieval, raising a severe concern about the utility of closing the "contrastive gap".
- **Unclear Definition**: The proposed *contrastive gap* lies at the core of this work; however, it has never been defined clearly. While an intuitive example on the "idealized" dataset was given to demonstrate the concept, the setting of this example is less convincing (see the detailed comments below), and a clear, formal definition for the contrastive gap is still lacking.
NIPS_2018_125
NIPS_2018
- Some missing references and somewhat weak baseline comparisons (see below) - Writing style needs some improvement, although, it is overall well written and easy to understand. Technical comments and questions: - The idea of active feature acquisition, especially in the medical domain was studied early on by Ashish Kapoor and Eric Horvitz. See https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/NIPS2009.pdf There is also a number of missing citations to work on using MDPs for acquiring information from external sources. Kanani et al, WSDM 2012, Narsimhan et al, "Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning", and others. - section 3, line 131: "hyperparameter balancing the relative importances of two terms is absorbed in the predefined cost". How is this done? The predefined cost could be externally defined, so it's not clear how these two things interact. - section 3.1, line 143" "Then the state changes and environment gives a reward". This is not true of standard MDP formulations. You may not get a reward after each action, but this makes it sound like that. Also, line 154, it's not clear if each action is a single feature or the power set. Maybe make the description more clear. - The biggest weakness of the paper is that it does not compare to simple feature acquisition baselines like expected utility or some such measure to prove the effectiveness of the proposed approach. Writing style and other issues: - Line 207: I didn't find the pseudo code in the supplementary material - The results are somewhat difficult to read. It would be nice to have a more cleaner representation of results in figures 1 and 2. - Line 289: You should still include results of DWSC if it's a reasonable baseline - Line 319: your dollar numbers in the table don't match! - The paper will become more readable by fixing simple style issues like excessive use of "the" (I personally still struggle with this problem), or other grammar issues. I'll try and list most of the fixes here. 4: joint 29: only noise 47: It is worth noting that 48: pre-training is unrealistic 50: optimal learning policy 69: we cannot guarantee 70: manners meaning that => manner, that is, 86: work 123: for all data points 145: we construct an MDP (hopefully, it will be proper, so no need to mention that) 154: we assume that 174: learning is a value-based 175: from experience. To handle continuous state space, we use deep-Q learning (remove three the's) 176: has shown 180: instead of basic Q-learning 184: understood as multi-task learning 186: aim to optimize a single 208: We follow the n-step 231: versatility (?), we perform extensive 233: we use Adam optimizer 242: We assume uniform acquisition cost 245: LSTM 289: not only feature acquisition but also classification. 310: datasets 316: examination cost?
- Section 3.1, line 143: "Then the state changes and environment gives a reward". This is not true of standard MDP formulations. You may not get a reward after each action, but this makes it sound like that. Also, line 154: it's not clear if each action is a single feature or the power set. Maybe make the description clearer.
NIPS_2021_1604
NIPS_2021
Weaknesses - Some parts of the paper are difficult to follow, see also Typos etc. below. - Ideally other baselines would also be included, such as the other works discussed in related work [29, 5, 6]. After the Authors' Response: My weakness points have been addressed in the authors' response; consequently I raised my score. All unclear parts have been answered. The authors explained why the chosen baseline makes the most sense; it would be great if this is added to the final version of the paper. Questions - Do you think there is a way to test beforehand whether I(X_1, Y_1) would be lowered more than I(X_2, Y_1)? - Out of curiosity, did you consider first using Aug and then CF.CDA? Especially for the correlated palate result it could be interesting to see if now CF.CDA can improve. - Did both CDA and MMI have the same lambda_RL (Eq 9) value? From Figure 6 it seems the biggest difference between CDA and MMI is that MMI has more discontinuous phrases/tokens. Typos, representation etc. - Line 69: Is X_2 defined as all features of X not in X_1? Stating this explicitly would be great. - Line 88: What ideas exactly do you take from [19] and how does your approach differ? - Eq 2: Does this mean Y is a value in [0, 1] for two possible labels? Can this be extended to more labels? This should be clarified. - 262: What are the possible Y values for TripAdvisor’s location aspect? - The definitions and usage of the various variables are sometimes difficult to follow. E.g. what exactly is the definition of X_2? (See also the first point above.) When does X_M become X_1? Sometimes the augmented data has a superscript, sometimes it does not. In line 131 the meanings of x_1 and x_2 are reversed, which can get confusing - maybe x’_1 and x’_2 would make it easier to follow, together with a table that explains the meaning of the different variables? - Section 2.3: Before line 116 mentions the change when adding the counterfactual example, it would be helpful to first state what I(X_2, Y_1) and I(X_1, Y_1) are without it. Minor points - Line 29: How is the desired relationship between input text and target labels defined? - Line 44: What is meant by "the initial rationale selector is perfect"? It seems that if it were perfect, no additional work would need to be done. - Line 14, 47: A brief explanation of “multi-aspect” would be helpful - Figure 1: Subscripts s and t should be 1 and 2? - 184: Delete “the” There is a broader impact section which discusses the limitations and dangers adequately.
- Ideally other baselines would also be included, such as the other works discussed in related work [29, 5, 6]. After the Authors' Response: My weakness points have been addressed in the authors' response; consequently I raised my score. All unclear parts have been answered. The authors explained why the chosen baseline makes the most sense; it would be great if this is added to the final version of the paper. Questions - Do you think there is a way to test beforehand whether I(X_1,
NIPS_2017_201
NIPS_2017
++++++++++ Novelty/Significance: The reformulation of the robust regression problem (Eq 6 in the paper) shows that robust regression is reducible to standard k-sparse recovery. Therefore, the proposed CRR algorithm is basically the well-known IHT algorithm (with a modified design matrix), and IHT has been (re)introduced far too many times in the literature to count. The proofs in the appendix seem to be correct, but also mostly follow existing approaches for analyzing IHT (see my comment below). Note that the “subset strong convexity” property (or at least a variation of this property) of random Gaussian matrices seems to have appeared before in the sparse recovery literature; see “A Simple Proof that Random Matrices are Democratic” (2009) by Davenport et al. Couple of questions: - What is \delta in the statement of Lemma 5? - Not entirely clear to me why one would need a 2-stage analysis procedure since the algorithm does not change. Some intuition in the main paper explaining this would be good (and if this two-stage analysis is indeed necessary, then it would add to the novelty of the paper). +++++++++ Update after authors' response +++++++++ Thanks for clarifying some of my questions. I took a closer look at the appendix, and indeed the "fine convergence" analysis of their method is interesting (and quite different from other similar analyses of IHT-style methods). Therefore, I have raised my score.
- What is \delta in the statement of Lemma 5?
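For readers unfamiliar with the IHT algorithm that the review above compares CRR to, a minimal generic iterative-hard-thresholding sketch for k-sparse recovery (the textbook template, not the paper's exact CRR update or design matrix):

```python
import numpy as np

def iht(X, y, k, step=None, iters=200):
    """Generic iterative hard thresholding for min ||y - Xw||^2 s.t. ||w||_0 <= k.
    As the review notes, CRR can be viewed as running such a loop on a modified
    design matrix; this sketch is only the standard version."""
    n, d = X.shape
    if step is None:
        step = 1.0 / (np.linalg.norm(X, 2) ** 2)  # conservative step size
    w = np.zeros(d)
    for _ in range(iters):
        w = w + step * X.T @ (y - X @ w)          # gradient step on the residual
        small = np.argsort(np.abs(w))[:-k]        # all but the k largest entries
        w[small] = 0.0                            # hard-threshold to k-sparse
    return w
```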
NIPS_2019_1408
NIPS_2019
- The paper is not that original given the amount of work in learning multimodal generative models: — For example, from the perspective of the model, the paper builds on top of the work by Wu and Goodman (2018) except that they learn a mixture of experts rather than a product of experts variational posterior. — In addition, from the perspective of the 4 desirable attributes for multimodal learning that the authors mention in the introduction, it seems very similar to the motivation in the paper by Tsai et al. Learning Factorized Multimodal Representations, ICLR 2019, which also proposed a multimodal factorized deep generative model that performs well for discriminative and generative tasks as well as in the presence of missing modalities. The authors should have cited and compared with this paper. ****************************Quality**************************** Strengths: - The experimental results are nice. The paper claims that their MMVAE modal fulfills all four criteria including (1) latent variables that decompose into shared and private subspaces, (2) be able to generate data across all modalities, (3) be able to generate data across individual modalities, and (4) improve discriminative performance in each modality by leveraging related data from other modalities. Let's look at each of these 4 in detail: — (1) Yes, their model does indeed learn factorized variables which can be shown by good conditional generation on MNIST+SVHN dataset. — (2) Yes, joint generation (which I assume to mean generation from a single modality) is performed on vision -> vision and language -> language for CUB, — (3) Yes, conditional generation can be performed on CUB via language -> vision and vice versa. Weaknesses: - (continuing on whether the model does indeed achieve the 4 properties that the authors describe) — (3 continued) However, it is unclear how significant the performance is for both 2) and 3) since the authors report no comparisons with existing generative models, even simple ones such as a conditional VAE from language to vision. In other words, what if I forgo with the complicated MoE VAE, and all the components of the proposed model, and simply use a conditional VAE from language to vision. There are many ablation studies that are missing from the paper especially since the model is so complicated. — (4) The authors have not seemed to perform extensive experiments for this criteria since they only report the performance of a simple linear classifier on top of the latent variables. There has been much work in learning discriminative models for multimodal data involving aligning or fusing language and vision spaces. Just to name a few involving language and vision: - Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding, EMNLP 2016 - DeViSE: A Deep Visual-Semantic Embedding Model, NeurIPS 2013 Therefore, it is important to justify why I should use this MMVAE model when there is a lot of existing work on fusing multimodal data for prediction. ****************************Clarity**************************** Strengths: - The paper is generally clear. I particularly liked the introduction of the paper especially motivation Figures 1 and 2. Figure 2 is particularly informative given what we know about multimodal data and multimodal information. - The table in Figure 2 nicely summarizes some of the existing works in multimodal learning and whether they fulfill the 4 criteria that the authors have pointed out to be important. 
Weaknesses: - Given the authors' great job in setting up the paper via Figure 1, Figure 2, and the introduction, I was rather disappointed that section 2 did not continue on this clear flow. To begin, a model diagram/schematic at the beginning of section 2 would have helped a lot. Ideally, such a model diagram could closely resemble Figure 2 where you have already set up a nice 'Venn Diagram' of multimodal information. Given this, your model basically assigns latent variables to each of the information overlapping spaces as well as arrows (neural network layers) as the inference and generation path from the variables to observed data. Showing such a detailed model diagram in an 'expanded' or 'more detailed' version of Figure 2 would be extremely helpful in understanding the notation (which there are a lot), how MMVAE accomplishes all 4 properties, as well as the inference and generation paths in MMVAE. - Unfortunately, the table in Figure 2 it is not super complete given the amount of work that has been done in latent factorization (e.g. Learning Factorized Multimodal Representations, ICLR 2019) and purely discriminative multimodal fusion (i.e. point d on synergy) - There are a few typos and stylistic issues: 1. line 18: "Given the lack explicit labels available” -> “Given the lack of explicit labels available” 2. line 19: “can provided important” -> “can provide important” 3. line 25: “between (Yildirim, 2014) them” -> “between them (Yildirim, 2014)” 4. and so on… ****************************Significance**************************** Strengths: - This paper will likely be a nice addition to the current models we have for processing multimodal data, especially since the results are quite interesting. - The paper did a commendable job in attempting to perform experiments to justify the 4 properties they outlined in the introduction. - I can see future practitioners using the variational MoE layers for encoding multimodal data, especially when there is missing multimodal data. Weaknesses: - That being said, there are some important concerns especially regarding the utility of the model as compared to existing work. In particular, there are some statements in the model description where it would be nice to have some experimental results in order to convince the reader that this model compares favorably with existing work: 1. line 113: You set \alpha_m uniformly to be 1/M which implies that the contributions from all modalities are the same. However, works in multimodal fusion have shown that dynamically weighting the modalities is quite important because 1) modalities might contain noise or uncertain information, 2) different modalities contribute differently to the prediction (e.g. in a video when a speaker is not saying anything then their visual behaviors are more indicative than their speech or language behaviors). Recent works therefore study, for example, gated attentions (e.g. Gated-Attention Architectures for Task-Oriented Language Grounding, AAAI 2018 or Multimodal Sentiment Analysis with Word-level Fusion and Reinforcement Learning, ICMI 2017) to learn these weights. How does your model compare to this line of related work, and can your model be modified to take advantage of these fusion methods? 2. line 145-146: "We prefer the IWAE objective over the standard ELBO objective not just for the fact that it estimates a tighter bound, but also for the properties of the posterior when computing the multi-sample estimate." -> Do you have experimental results that back this up? 
How significant is the difference? 3. line 157-158: "needing M^2 passes over the respective decoders in total" -> Do you have experimental runtimes to show that this is not a significant overhead? The number of modalities is quite small (2 or 3), but when the decoders are large recurrent of deconvolutional layers then this could be costly. ****************************Post Rebuttal**************************** The author response addressed some of my concerns regarding novelty but I am still inclined to keep my score since I do not believe that the paper is substantially improving over (Wu and Goodmann, 2018) and (Tsai et al, 2019). The clarity of writing can be improved in some parts and I hope that the authors would make these changes. Regarding the quality of generation, it is definitely not close to SOTA language models such as GPT-2 but I would still give the authors credit since generation is not their main goal, but rather one of their 4 defined goals to measure the quality of multimodal representation learning.
1. line 113: You set \alpha_m uniformly to be 1/M which implies that the contributions from all modalities are the same. However, works in multimodal fusion have shown that dynamically weighting the modalities is quite important because
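As a concrete reading of the uniform weighting questioned in the point above, a minimal sketch of sampling from a mixture-of-experts posterior with fixed α_m = 1/M versus user-supplied weights (a generic illustration under these assumptions, not the MMVAE implementation):

```python
import numpy as np

def sample_moe_posterior(mus, sigmas, alphas=None, rng=None):
    """Sample z from q(z | x_1..x_M) = sum_m alpha_m N(mu_m, diag(sigma_m^2)).
    alphas=None reproduces the fixed uniform weighting (alpha_m = 1/M) that the
    review questions; a learned gating network could instead supply
    data-dependent alphas, as in the dynamic-fusion works the reviewer cites."""
    rng = rng or np.random.default_rng(0)
    M = len(mus)
    alphas = np.full(M, 1.0 / M) if alphas is None else np.asarray(alphas)
    m = rng.choice(M, p=alphas)                       # pick a modality expert
    return mus[m] + sigmas[m] * rng.standard_normal(np.shape(mus[m]))
```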
ACL_2017_37_review
ACL_2017
Weak results/summary of the "side-by-side human" comparison in Section 5. Some disfluency/agrammaticality. - General Discussion: The article proposes a principled means of modeling utterance context, consisting of a sequence of previous utterances. Some minor issues: 1. Past turns in Table 1 could be numbered, making the text associated with this table (lines 095-103) less difficult to ingest. Currently, readers need to count turns from the top when identifying references in the authors' description, and may wonder whether "second", "third", and "last" imply a side-specific or global enumeration. 2. Some reader confusion may be eliminated by explicitly defining what "segment" means in "segment level", as occurring on line 269. Previously, on line 129, this seemingly same thing was referred to as "a sequence-sequence [similarity matrix]". The two terms appear to be used interchangeably, but it is not clear what they actually mean, despite the text in Section 3.3. It seems the authors may mean "word subsequence" and "word subsequence to word subsequence", where "sub-" implies "not the whole utterance", but I am not sure. 3. Currently, the variable symbol "n" appears to be used to enumerate words in an utterance (line 306) as well as utterances in a dialogue (line 389). The authors may choose two different letters for these two different purposes, to avoid confusing readers going through their equations. 4. The statement "This indicates that a retrieval based chatbot with SMN can provide a better experience than the state-of-the-art generation model in practice." at the end of Section 5 appears to be unsupported. The two approaches referred to are deemed comparable in 555 out of 1000 cases, with the baseline better than the proposed method in 238 out of the remaining 445 cases. The authors are encouraged to assess and present the statistical significance of this comparison. If it is weak, their comparison permits them at best to claim that their proposed method is no worse (rather than "better") than the VHRED baseline. 5. The authors may choose to insert into Figure 1 the explicit "first layer", "second layer" and "third layer" labels they use in the accompanying text. 6. There is a pervasive use of "to meet", as in "a response candidate can meet each utterace" on line 280, which is difficult to understand. 7. Spelling: "gated recurrent unites"; "respectively" on line 133 should be removed; punctuation on lines 186 and 188 is exchanged; "baseline model over" -> "baseline model by"; "one cannot neglects".
6. There is a pervasive use of "to meet", as in "a response candidate can meet each utterace" on line 280, which is difficult to understand.
NIPS_2016_39
NIPS_2016
: One could eventually object that adversarial domain adaptation is not new, and neither are projections into shared and private spaces and orthogonality constraints. However, these are minor points. I still think that the whole package is sufficiently novel even for a high level conference as NIPS. I am also wondering where the exact contribution of the private space actually comes from. The training loss related to the task classifier is unlikely to give any higher performance on the target data (by construction due to the orthogonality constraints). Minor remarks: - In equation (5), I think the loss should be HH^T and not H^T H if orthogonality is supposed to be favored and features are rows. - the task loss is called L_task in the text but L_class in figure 1
- the task loss is called L_task in the text but L_class in figure 1
NIPS_2020_80
NIPS_2020
- It is not clear to me whether the pooling is done sequentially or in parallel. - The unpooling issue of missing nodes was not clear. I'd ask the authors to expand more on that. - When pooling, it seems that feature averaging was done. Were any other pooling methods tried? - What are other limitations of the method? In the graph case the network was pretty shallow; is this the case here?
- What are other limitations of the method? In the graph case the network was pretty shallow; is this the case here?
NIPS_2020_1592
NIPS_2020
Major concerns: 1. While it is impressive that this work gets slightly better results than MLE, there are more hyper-parameters to tune, including mixture weight, proposal temperature, nucleus cutoff, importance weight clipping, MLE pretraining (according to appendix). I find it disappointing that so many tricks are needed. If you get rid of pretraining/initialization from T5/BART, would this method work? 2. This work requires MLE pretraining, while prior work "Training Language GANs from Scratch" does not. 3. For evaluation, since the claim of this paper is to reduce exposure bias, training a discriminator on generations from the learned model is needed to confirm if it is the case, in a way similar to Figure 1. Note that it is different from Figure 4, since during training the discriminator is co-adapting with the generator, and it might get stuck at a local optimum. 4. This work is claiming that it is the first time that language GANs outperform MLE, while prior works like seqGAN or scratchGAN all claim to be better than MLE. Is this argument based on the tradeoff between BLEU and self-BLEU from "language GANs falling short"? If so, Figure 2 is not making a fair comparison since this work uses T5/BART which is trained on external data, while previous works do not. What if you only use in-domain data? Would this still outperform MLE? Minor concerns: 5. This work only uses answer generation and summarization to evaluate the proposed method. While these are indeed conditional generation tasks, they are close to "open domain" generation rather than "close domain" generation such as machine translation. I think this work would be more convincing if it is also evaluated in machine translation which exhibits much lower uncertainties per word. 6. The discriminator accuracy of ~70% looks low to me, compared to "Real or Fake? Learning to Discriminate Machine from Human Generated Text" which achieves almost 90% accuracy. I wonder if the discriminator was not initialized with a pretrained LM, or is that because the discriminator used is too small? ===post-rebuttal=== The added scratch GAN+pretraining (and coldGAN-pretraining) experiments are fairer, but scratch GAN does not need MLE pretraining while this work does, and we know that MLE pretraining makes a big difference, so I am still not very convinced. My main concern is the existence of so many hyper-parameters/tricks: mixture weight, proposal temperature, nucleus cutoff, importance weight clipping, and MLE pretraining. I think some sensitivity analysis similar to scratch GAN's would be very helpful. In addition, rebuttal Figure 2 is weird: when generating only one word, why would cold GAN already outperform MLE by 10%? To me, this seems to imply that improvement might be due to hyper-parameter tuning.
5. This work only uses answer generation and summarization to evaluate the proposed method. While these are indeed conditional generation tasks, they are close to "open domain" generation rather than "close domain" generation such as machine translation. I think this work would be more convincing if it is also evaluated in machine translation which exhibits much lower uncertainties per word.
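Of the hyper-parameters listed in the review above, the nucleus cutoff refers to standard top-p truncation at sampling time; a minimal sketch of that step (generic nucleus sampling, not the paper's full proposal distribution):

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Standard top-p (nucleus) sampling: keep the smallest set of tokens whose
    cumulative probability exceeds p, renormalize, and sample from that set."""
    rng = rng or np.random.default_rng(0)
    order = np.argsort(probs)[::-1]                         # descending by prob
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1
    keep = order[:cutoff]                                   # the nucleus
    kept = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=kept))
```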
ARR_2022_291_review
ARR_2022
- The biggest concerns/confusions during reading stem from the lack of implementation details for the proposed methods; they should have been described in the implementation details in Section 4.1. 1) For the interpolation method, how was \lambda set? 2) For the dropout, through the reading of the response letter, my understanding is that multiple stochastic masks (with 0s and 1s) are applied to a document representation from the encoder. Herein, what is the dropping rate? How many masks have been generated? - I am not sure TQA is a good enough benchmark for the proposed method. As can be seen from Table 1, none of the competing deep models could outperform BM25. Though the proposed DAR could boost DPR, I am not sure if such a gain is meaningful. - The proposed methods are novel and intriguing. The confusion mostly comes from the implementation details and the results. Why not extend the effort to a full paper and properly add all of the details, given the extra space?
2) For the dropout, through the reading of the response letter, my understanding is that multiple stochastic masks (with 0s and 1s) are applied to a document representation from the encoder. Herein, what is the dropping rate? How many masks have been generated?
3VD4PNEt5q
ICLR_2024
* The effectiveness of the proposed two-stage optimization approach needs further justifications. Only showing the performance drop on fusion models is not enough. Comparisons with other single-stage attacks are also needed to demonstrate the effectiveness. Without proper benchmarks and comparisons with other SOTA algorithms, it is hard to justify the effectiveness of the technical contributions. * How to ensure the feasibility of the adversarial patches? Since the gradient optimization may find patches in the undeployable areas e.g., sky, can the proposed approach ensure the attack is feasible in the real physical world? Also in the paper, the author assumes the lidar data would not be changed. Since the patch may influence the lidar intensity or introduce extra points, please provide justifications for this assumption.
* The effectiveness of the proposed two-stage optimization approach needs further justifications. Only showing the performance drop on fusion models is not enough. Comparisons with other single-stage attacks are also needed to demonstrate the effectiveness. Without proper benchmarks and comparisons with other SOTA algorithms, it is hard to justify the effectiveness of the technical contributions.
BpKbKeY0La
ICLR_2025
1. The text in Figure 6 and Table 1 is too small. 2. This paper does not report the type of GPUs used or the inference time at test time. 3. Lack of visual comparisons with SUPIR [1]. 4. The visual comparisons are not thorough. [1] Gu Jinjin, et al. PIPAL: a large-scale image quality assessment dataset for perceptual image restoration.
2. This paper does not report the type of GPUs used or the inference time at test time.
ICLR_2021_242
ICLR_2021
In the paper, the motivation for using a meta-gradient to solve the formulated Lagrangian optimization is only explained once, at the beginning of Page 4: "Our intuition is that a learning rate gradient that takes into account the overall task objective and constraint thresholds will lead to improved overall performance." However, it is not clear to me what exactly you want to achieve. Are you trying to find the "ground truth" λ̄ (i.e., 1000 in the experiments)? That does not seem to be the case. Do you want to somehow learn a "robust" policy that works well for all λ̄ values? If so, it would be expected that the authors show empirical results for different λ̄ values and that the proposed method works well for all of them. Following 1), the paper evaluates different forms of outer loss and shows that using a critic-only outer loss yields the best performance. What is the intuition behind this? It would be good if the authors could have a more in-depth discussion on this. In the empirical evaluations, the paper defines a "penalized return". This is very unintuitive -- in reality there is not always such a quantized trade-off between reward and the penalty of constraint violation; if there is, then one could directly add it to the objective. It would be more interesting to see, for example, that given the same reward value, the proposed method always outperforms the baselines in penalty, or the other way round. The "penalized return" metric is therefore unconvincing to me. Assuming the penalized return metric makes sense: from Figure 3 it appears that the performances of the baselines are sometimes very close to the proposed method. What puzzles me is that the performances of the baselines vary dramatically across domains. Can the authors elaborate more on why this happens? Questions: See weakness points 1), 2), and 4).
1), 2), and 4). Less important points: In Table 1 it seems that the overall performance of RS-D4PG monotonically increases w.r.t. λ; I am curious to see what happens when λ is even smaller. Page 3, line 2, J^π_obj(θ): τ and η are missing in the bracket. Line 4 of the D4PG paragraph: Q_T(s, ...) -> s'.
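For the "penalized return" metric questioned above, a minimal sketch of one plausible form of such a metric, i.e. a fixed trade-off between return and constraint violation (the paper's exact definition may differ, and the threshold and λ̄ values below are placeholders):

```python
def penalized_return(rewards, costs, threshold, lam_bar=1000.0):
    """One plausible 'penalized return': episode return minus a fixed penalty
    coefficient lam_bar times the amount by which the accumulated constraint
    cost exceeds its threshold. Illustrative only; the reviewed paper's exact
    definition may differ."""
    violation = max(0.0, sum(costs) - threshold)
    return sum(rewards) - lam_bar * violation

# Example: total reward 120, total cost 7, threshold 5 -> 120 - 1000*2 = -1880.
print(penalized_return([120.0], [7.0], threshold=5.0))
```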
6NaiZHL3l1
ICLR_2024
- The analysis of why the proposed framework works for evaluating image inpainting algorithms is confusing. It seems that the framework tries to use the first inpainting method to recover the masked regions from the unmasked regions of the original image with a normal mask, and then employs another random inpainting method to recover the patch-masked regions from the unmasked regions of the first inpainted image with multiple patch masks that are different from the first normal mask. But why this works for better evaluating the first inpainting method is not clear. - The evaluation on the proposed benchmark is not convincing. The experiments are insufficient to validate the efficacy of the proposed metric, since neither the dataset nor the methods are comprehensive enough. Also, some details are not clear; for example, what is the second inpainting method in these experiments? - The ablation study is not sufficient. There are many components that may influence the performance but are not analyzed, for example: choosing a different second inpainting method, a different sub-metric, a different dataset, a different mask type, etc. - It seems that the Perceptual Metric in Figure 2 should connect the Second Inpainted Images with the Inpainted Image rather than the Images Masked by Second Masks.
- It seems that the Perceptual Metric in Figure 2 should connect the Second Inpainted Images with the Inpainted Image rather than the Images Masked by Second Masks.
NIPS_2018_83
NIPS_2018
- An argument against DEN, a competitor, is hyper-parameter sensitivity. First, this isn't really shown, but second (and more importantly) reinforcement learning is well-known to be extremely unstable and require a great deal of tuning. For example, even random seed changes are known to change the behavior of the same algorithm, and different implementation of the same algorithm can get very different results (this has been heavily discussed in the community; see keynote ICLR talk by Joelle Pineau as an example). This is not to say the proposed method doesn't have an advantage, but the argument that other methods require more tuning is not shown or consistent with known characteristics of RL. * Related to this, I am not sure I understand experiments for Figure 3. The authors say they vary the hyper-parameters but then show results with respect to # of parameters. Is that # of parameters of the final models at each timestep? Isn't that just varying one hyperparameter? I am not sure how this shows that RCL is more stable. - Newer approaches such as FearNet [1] should be compared to, as they demonstrated significant improvement in performance (although they did not compare to all of the methods compared to here). [1] FearNet: Brain-Inspired Model for Incremental Learning, Ronald Kemker, Christopher Kanan, ICLR 2018. - There is a deeper tie to meta-learning, which has several approaches as well. While these works don't target continual learning directly, they should be cited and the authors should try to distinguish those approaches. The work on RL for architecture search and/or as optimizers for learning (which are already cited) should be more heavily linked to this work, as it seems to directly follow as an application to continual learning. - It seems to me that continuously adding capacity while not fine-tuning the underlying features (which training of task 1 will determine) is extremely limiting. If the task is too different and the underlying feature space in the early layers are not appropriate to new tasks, then the method will never be able to overcome the performance gap. Perhaps the authors can comment on this. - Please review the language in the paper and fix typos/grammatical issues; a few examples: * [1] "have limitation to solve" => "are limited in their ability to solve" * [18] "In deep learning community" => "In THE deep learning community" * [24] "incrementally matche" => "incrementally MATCH" * [118] "we have already known" => "we already know" * and so on Some more specific comments/questions: - This sentence is confusing [93-95] "After we have trained the model for task t, we memorize each newly added filter by the shape of every layer to prevent the caused semantic drift." I believe I understood it after re-reading it and the subsequent sentences but it is not immediately obvious what is meant. - [218] Please use more objective terms than remarkable: "and remarkable accuracy improvement with same size of networks". Looking at the axes, which are rather squished, the improvement is definitely there but it would be difficult to characterize it as remarkable. - The symbols in the graphs across the conditions/algorithms is sometimes hard to distinguish (e.g. + vs *). Please make the graphs more readable in that regard. Overall, the idea of using reinforcement learning for continual learning is an interesting one, and one that makes sense considering recent advances in architecture search using RL. 
However, this paper could be strengthened by 1) Strengthening the analysis in terms of the claims made, especially with respect to not requiring as much hyper-parameter tuning, which requires more evidence given that RL often does require significant tuning, and 2) comparison to more recent methods and demonstration of more challenging continual learning setups where tasks can differ more widely. It would be good to have more in-depth analysis of the trade-offs between three approaches (regularization of large-capacity networks, growing networks, and meta-learning). ============================================== Update after rebuttal: Thank you for the rebuttal. However, there wasn't much new information in the rebuttal to change the overall conclusions. In terms of hyper-parameters, there are actually more hyper-parameters for reinforcement learning that you are not mentioning (gamma, learning rate, etc.) which your algorithm might still be sensitive to. You cannot consider only the hyper-parameter related to the continual learning part. Given this and the other limitations mentioned, overall this paper is marginally above acceptance so the score has been kept the same.
- This sentence is confusing [93-95] "After we have trained the model for task t, we memorize each newly added filter by the shape of every layer to prevent the caused semantic drift." I believe I understood it after re-reading it and the subsequent sentences but it is not immediately obvious what is meant.
Xe6UmKMInx
ICLR_2025
- Many sentences and paragraphs are unclear. I have given my best to collect most examples but I might have forgotten some of them (examples listed below) - Many claims are insufficiently supported by evidence (either from other papers or experiments). Similarly, I listed multiple examples below. - The experiments are very limited and too simple (single digit addition, and a "spatial reasoning" task). It is expected that a model a model fine-tuned on the task performs better than zero-shot. - The comparison between the author's method and GPT-3 seems unfair since they evaluate models trained for the simple addition task (in-distribution/training distribution evaluation) against GPT-3 zero-shot capabilities. - The approach is surprising. The authors claim interest in "long chain reasoning" tasks such as mathematics (line 128), and tasks that require a variable amount of computation per token. At the same time, their approach relies on an auto-encoder that represents arbitrary-length inputs into a fixed-size tensor. As such, the approach does not look promising for long-horizon tasks. - Section 3.3 is unexpected. Before discussing the experiments, the authors describe many possible improvement on their method. - When I started reading this paper, I thought that the authors were planning to improve the reasoning capabilities of language models using a variable amount of *inference* computation per token. In section 3.4, the authors argue that they do not consider discrete diffusion models because of their slow *training* computation cost. However, a larger training cost could be acceptable if it produces a model with an adaptive computation mechanism. I would say those two questions (training vs inference costs) are separate, and it would be acceptable to study a model that is slow to train in their context. - In section 3.4, authors argue that discrete diffusion models "have reasoning potential", and cite [4](https://arxiv.org/abs/2305.18619). However, there are no reasoning experiments in [4](https://arxiv.org/abs/2305.18619) ### Examples of unclear sentences and paragraphs #### Abstract - "In this paper, we argue that the autoregressive nature of current language models are not suited for reasoning due to fundamental limitations, and that reasoning requires slow accumulation of knowledge through time". What do you mean by time? Are you talking about the sequence length? If yes, then I would argue that causal transformers are slowly creating a representation based on their input. - "...their reasoning is not handicapped by the order of the tokens in the dataset". Do you mean the order of tokens in the *sequence*? The order in the dataset should not have a direct influence if the data is shuffled correctly. #### Rest of the paper - Lines 39-41: " and often times the answer to easier subproblems lead to better answers for harder subproblems (e.g. geometry, algebra).": this is unclear to me. Can you provide me with 2 examples from geometry or algebra, where the answer to easier subproblems lead to better answers for harder subproblems? - Lines 41-42: "Even though recurrent models can perform many forward passes in latent space,": are you talking about recurrent neural networks? I am not aware of work on recurrent neural networks operating in a latent space, beyond processing embedded tokens, similarly to a regular, decoder only transformer. 
Additionally, Vanilla RNNs also spend a constant amount of computation per input token, just as transformers do - Lines 52-53: "The required semantics might be representable by its token embeddings (e.g. spatial reasoning)": I don't understand this sentence or the example; you need to be more explicit. - 158-160: "Second, we train a diffusion model such that the diffusion transformer denoises the target sequence compressed latent conditioned on the input sequence compressed latent.": unclear - 212: "We follow the standard DDPM approach to train $x_0$ and train the variance $\Sigma_\theta$" The best results in DDPM were obtained by predicting the noise, not $x_0$, hence you are not following the standard DDPM parameterization. ### Examples of insufficiently supported claims Lines 47-49: " LDMs generate a latent vector by iteratively denoising Gaussian noise throughout many timesteps, which intuitvely makes it more suitable for tasks that require extrapolating many facts over long horizons.": I don't see why iterative denoising is "intuitively" more suitable for "extrapolating many facts". I don't understand the relationship between Gaussian noise and extrapolation. - Lines 51-52: "LDMs perform reasoning in latent space which is semantically richer than discrete tokens." What evidence do you have that the latent space is richer? Are you saying this because you assume that continuous spaces are always richer? Since you are training your own auto-encoder, you need to demonstrate that the space you train is indeed richer. - 57-59: you are claiming that latent diffusion models are stronger than discrete diffusion models. I would argue the opposite. The recent breakthroughs in diffusion language modeling used a discrete diffusion formalism. See [1](https://arxiv.org/abs/2406.07524), [2](https://arxiv.org/abs/2310.16834), [3](https://arxiv.org/abs/2406.04329) for example. - Line 78-79: "diffusion models have been able to outperform generative adversarial networks on image generation benchmarks": yes, but you need a citation there - Lines 129-130: "Previous work has tried to tackle... but with limited success": citation needed - Lines 156-158: "This improves the reliability and efficiency, because diffusion models are more compute efficient a training smaller dimensional latent variables and input tokens inherently have different lengths": evidence needed - Lines 217-218: " we can always sample more efficiently using different samplers from the literature that trade off sample quality.": citation needed
- Line 78-79: "diffusion models have been able to outperform generative adversarial networks on image generation benchmarks": yes, but you need a citation there - Lines 129-130: "Previous work has tried to tackle... but with limited success": citation needed - Lines 156-158: "This improves the reliability and efficiency, because diffusion models are more compute efficient a training smaller dimensional latent variables and input tokens inherently have different lengths": evidence needed - Lines 217-218: " we can always sample more efficiently using different samplers from the literature that trade off sample quality.": citation needed
NIPS_2018_120
NIPS_2018
Writing - Primary MTL should be differentiated from alternative goals of MTL (such as improving the performance of all tasks or saving memory and computational cost by sharing computation in a single network) early on in the abstract and introduction - In the abstract, a point is made about residual connections in ROCK allowing auxiliary features to explicitly impact detection prediction. This needs to be contrasted with standard MTL where the impact is implicit. - The claim about ROCK handling missing annotations modalities is unclear. The MLT dataset needs to be described in Sec 4.3. - The introduction describes Transfer Learning (TL) and Fine-tuning (FT) as sequential MTL. I do not completely agree with this characterization. TL is a broader term for the phenomenon of learnings from one task benefitting another task. Fine-tuning is a sequential way of doing it. Standard MTL is a parallel means to the same end. - The first 2 paragraphs of introduction read like literature review instead of directly motivating ROCK and the problem that it solves. - Need to describe Flat MTL and include a diagram similar to Fig 2 that can be directly visually compared to ROCK. - Fig 1 is not consistent with Fig 2. Fig 2 shows one encoder-decoder per auxiliary task whereas Fig 1 shows a single shared encoder-decoder for multiple tasks. - L88-89 try to make the case that ROCK has similar complexity as the original model. I would be very surprised if this true because ROCK adds 8 conv layers, 3 pooling layers, and 1 fusion layer. Inference timings must be provided to make this claim. Weaknesses: Experiments - Ablation showing the contribution of each auxiliary task on object detection performance in ROCK as well as in standard MTL. This can be done by comparing the full model (DNS) with models where one task is dropped out (DS, NS, DN). - Cite MLT dataset in Table 2 caption and describe Geo, presumably some kind of geometric features. - More details are needed for reproducibility like backbone architecture, activations between conv layers, number of channels, etc. Weaknesses: Originality - While primary MTL setting considered in this work is very useful, especially in data-deficient domains, the main contribution of this work, the ROCK architecture, comes across as a little incremental. Summary My current assessment is that in spite of the weaknesses, the model is simple, reasonably well motivated and shows decent performance gains over appropriate baselines. Hence I am currently leaning towards an accept with a rating of "6". I am not providing a higher rating as of now because the writing can be improved in several places, an ablation is missing, and the novelty is limited. My confidence score if 4 because I have only briefly looked at [19] and [35] which seem to be the most relevant literature. --Final Review After Rebuttal-- The authors did present an ablation showing the effect of different tasks on multitask training as requested. A significant number of my comments were writing suggestions which I believe would improve the quality of the paper further. The authors have agreed to make appropriate modifications. In spite of some concern about novelty, I think this work is a valuable contribution towards an understanding of multitask learning. All reviewers seem to be more or less in agreement. Hence, I vote for an accept and have increased my rating to 7 (previously 6).
- Fig 1 is not consistent with Fig 2. Fig 2 shows one encoder-decoder per auxiliary task whereas Fig 1 shows a single shared encoder-decoder for multiple tasks.
ICLR_2022_21
ICLR_2022
However, some key architectural details can be clarified further for full reproducibility and analysis. Specifically: 1. How are historical observations combined with inputs known over all time given differences in sequence lengths (L vs L+M)? The text mentions separate embedding and addition with positional encoding, but clarifications on how the embeddings are combined and fed into the CSCM are needed. 2. Can each node attend to its own lower-level representation? From equation 2, it seems to be that only neighbouring nodes are attended to, based on the description of N_l^(s). 3. Do the authors have any guidelines on how to select S/A/C (and consequently N) for a given receptive field L? In addition, while the ablation analysis tests the impact of changing CSCM architectures, it would be good to evaluate the base performance without the PAM to determine the value added by attention. This would also provide a simple comparison vs dilated CNNs which have been used successfully in time series forecasting applications (e.g. WaveNet). Finally, could I double check which dataset was used for the ablation analysis as well? I seem to be having some difficulty lining the numbers in Tables 4-6 up with Table 3.
2. Can each node attend to its own lower-level representation? From equation 2, it seems to be that only neighbouring nodes are attended to, based on the description of N_l^(s).
NIPS_2016_133
NIPS_2016
--- The clarity of the main parts has clearly improved compared to the last version I saw as an ICML reviewer. Generally, it seems natural to investigate the direction of how causal models can help for autonomous agents. The authors present an interesting proposal for how this can be done in the case of simple bandits, delivering scenarios, algorithms, mathematical analysis, and experimental analysis. However, the contribution also has strong limitations. The experiments, which are only on synthetic data, seem to show that their Algorithms 1 and 2 outperform what they consider as the baseline ("Successive Rejects") in most cases. But how strong of a result is that in the light of the baseline not using the extensive causal knowledge? In the supplement, I only read the proof of Theorem 1: the authors seem to know what they are doing, but they spend too little time on making clear the non-trivial implications, while spending too much time on trivial reformulations (see more detailed comments below). In Section 4, assuming knowledge of the conditional for the objective variable Y seems pretty restrictive (although, in principle, the results seem to be generalizable beyond this assumption), limiting the usefulness of this section. Generally, the significance and potential impact of the contribution now strongly hinge on whether (1) it (approximately) makes sense that the agent can in fact intervene and (2) it is a realistic scenario that strong prior knowledge in the form of the causal DAG (plus conditionals P(PA_Y|a) in the case of Algorithm 2) is given. I'm not sure about that, since this is a problem the complete framework proposed by Pearl and others is facing, but I think it is important to try and find out! (And at least informal causal thinking and communication seem ubiquitously helpful in life.) Further comments: --- Several notes on the proof of Theorem 1 in the supplement: - Typo: l432: a instead of A - The statement in l432 of the supplement only holds with probability at least 1 - \delta, right? - Why don't you use equation numbers and just say which equation follows from which other equations? - How does the ineq. after l433 follow from Lemma 7? It seems to follow somehow from a combination of the previous inequalities, but please facilitate the reading a bit by stating how Lemma 7 comes into play here. - The authors seem to know what they are doing, and the rough idea intuitively makes sense, but for the (non-expert in concentration inequalities) reader it is too difficult to follow the proof; this needs to be improved. Generally, as already mentioned, the following is unclear to me: when does a physical agent actually intervene on a variable? Usually, it only has perfect control over its own control output - any other variable "in the environment" can just be indirectly controlled, and so the robot can never be sure if it actually intervenes (in particular, outcomes of apparent interventions can be confounded by the control signal through unobserved paths). For me it's hard to tell if this problem (already mentioned by Norbert Wiener and others, by the way) is a minor issue or a major issue. Maybe one has to see the ideal notion of intervention as just an approximation to what happens in reality.
- How does the ineq. after l433 follow from Lemma 7? It seems to follow somehow from a combination of the previous inequalities, but please facilitate the reading a bit by stating how Lemma 7 comes into play here.
ICLR_2023_2934
ICLR_2023
1. The main contribution of this paper is unclear. Although it claimed that the proposed method possesses 8 novel properties, they either somewhat overstated the ability or applicability of the proposed method or were not well-supported. The main idea of how the proposed method copes with dynamic large-scale multitasking is not clear. How the automation is achieved is also unclear. 2. A more comprehensive review should be provided. How existing works deal with dynamic multitasking problems and what is the scale of the problems they face are not well presented. 3. The problem studied in this paper is not well-formulated. It is not clear what is meant by "large-scale" as well as how large is large in this paper.
1. The main contribution of this paper is unclear. Although it claimed that the proposed method possesses 8 novel properties, they either somewhat overstated the ability or applicability of the proposed method or were not well-supported. The main idea of how the proposed method copes with dynamic large-scale multitasking is not clear. How the automation is achieved is also unclear.
NIPS_2021_2191
NIPS_2021
of the paper: [Strengths] The problem is relevant. Good ablation study. [Weaknesses] - The statement in the intro about bottom-up methods is not necessarily true (Line 28). Bottom-up methods do have receptive fields that can infer from all the information in the scene and can still predict invisible keypoints. - Several parts of the methodology are not clear. - PPG outputs a complete pose relative to every part's center. Thus O_{up} should contain the offset for every keypoint with respect to the center of the upper part. In Eq.2 of the supplementary material, it seems that O_{up} is trained to output the offset for the keypoints that are not farther than a distance \textit{r} from the center of the corresponding part. How are the groundtruths actually built? If it is the latter, how can the network parts responsible for each part predict all the keypoints of the pose? - Line 179, what did the authors mean by saying that the fully connected layers predict the ground-truth in addition to the offsets? - Is \delta P_{j} a single offset for the center of that part, or does it contain distinct offsets for every keypoint? - In Section 3.3, how is G built using the human skeleton? It is better to describe the size and elements of G. Also, add the dimensions of G, X, and W to better understand what DGCN is doing. - Experiments can be improved: - For instance, the bottom-up method [9] has reported results on crowdpose dataset outperforming all methods in Table 4 with a ResNet-50 (including the paper one). It will be nice to include it in the tables - It will be nice to evaluate the performance of their method on the standard MS coco dataset to see if there is a drop in performance in easy (non occluded) settings. - No study of inference time. Since this is a pose estimation method that is direct and does not require detection or keypoint grouping, it is worth comparing its inference speed to previous top-down and bottom-up pose estimation methods. - Can we visualize G, the dynamic graph, as it changes through DGCN? It might give an insight into what the network used to predict keypoints, especially the invisible ones. [Minor comments] In Algorithm 1 line 8 in Suppl Material, did the authors mean Eq 11 instead of Eq.4? Fig1 and Fig2 in the supplementary are the same. Spelling mistake line 93: "It it requires…". What does '… updated as model parameters' mean in line 176? Do the authors mean Equation 7 in line 212? The authors have talked about limitations in Section 5 and have mentioned that there are no negative societal impacts.
- For instance, the bottom-up method [9] has reported results on crowdpose dataset outperforming all methods in Table 4 with a ResNet-50 (including the paper one). It will be nice to include it in the tables - It will be nice to evaluate the performance of their method on the standard MS coco dataset to see if there is a drop in performance in easy (non occluded) settings.
Y3wpuxd7u9
ICLR_2024
* The claim of making use of “annotation guideline” may be an overstatement - this paper only considered the label name, label description, and few-shot examples; however, annotation guidelines in the IE domain are very complicated and were curated by linguists. E.g., for TACRED slot filling (https://tac.nist.gov/2015/KBP/ColdStart/guidelines/TAC_KBP_2015_Slot_Descriptions_V1.0.pdf), section 3.6 per:city_of_birth, they use “GPEs below the city level (e.g. 5 boroughs of New York City) are not valid fillers.“ as an example rule to guide annotators. The prompts proposed by this paper might not fully capture the depth of true guideline understanding. * The paper omits key references from the era before LLMs that discuss label descriptions and verbalization. Examples include: * Zero-Shot Relation Extraction via Reading Comprehension * An Empirical Study on Multiple Information Sources for Zero-Shot Fine-Grained Entity Typing * Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction Small formatting issues: * Use ``’’ instead of ‘’’’. * Table 6 could be more user-friendly. Presenting exact numbers (like 10/10,000) would be clearer than percentages. Please see more in the question section.
* The claim of making use of “annotation guideline” may be an overstatement - this paper only considered the label name, label description, and few-shot examples; however, annotation guidelines in the IE domain are very complicated and were curated by linguists. E.g., for TACRED slot filling (https://tac.nist.gov/2015/KBP/ColdStart/guidelines/TAC_KBP_2015_Slot_Descriptions_V1.0.pdf), section 3.6 per:city_of_birth, they use “GPEs below the city level (e.g. 5 boroughs of New York City) are not valid fillers.“ as an example rule to guide annotators. The prompts proposed by this paper might not fully capture the depth of true guideline understanding.
8l2m7jctGv
EMNLP_2023
- The contribution is a combination of existing methods from computer vision and is incremental. - The illustration in fig. 1 is confusing; the symbol definition is different from what the authors use in the text. - The motivation for fuzzy-based token pruning is not clear. Even in imbalanced distributions, discarding tokens according to the importance score will not discard important tokens while retaining unimportant tokens. Given that tokens of lesser significance inherently possess a lower ranking compared to their more important counterparts, the pruning process inherently eliminates tokens of smaller importance. So why introduce uncertainty in pruning? More uncertainty makes it more likely that important tokens are dropped. - The fuzzy-based approach appears to limit the ability to distinguish between tokens characterized by high and low values. The alignment of the membership function with the concept of uncertainty remains unclear. - The experimental comparison is weak; the authors only compare their method to the BERT baseline. The authors should compare their method to token pruning and token combination baselines.
- The experimental comparison is weak; the authors only compare their method to the BERT baseline. The authors should compare their method to token pruning and token combination baselines.
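To make the deterministic-ranking argument in the review above concrete, here is a minimal sketch of plain score-based top-k token pruning (an assumed baseline for illustration, not the paper's fuzzy method; the importance scores and keep ratio are hypothetical):

```python
# Illustrative sketch: deterministic importance-based token pruning.
# The importance scores are hypothetical (e.g. attention mass received by
# each token, averaged over heads); any monotone transform of the scores
# keeps the same top-k set, so the lowest-scoring tokens are always removed.
import torch

def prune_tokens(hidden_states: torch.Tensor,
                 importance: torch.Tensor,
                 keep_ratio: float = 0.5):
    """hidden_states: (batch, seq_len, dim); importance: (batch, seq_len)."""
    batch, seq_len, dim = hidden_states.shape
    k = max(1, int(seq_len * keep_ratio))
    kept = importance.topk(k, dim=1).indices        # highest-scoring tokens
    kept, _ = kept.sort(dim=1)                      # preserve token order
    idx = kept.unsqueeze(-1).expand(-1, -1, dim)
    return hidden_states.gather(1, idx), kept

h = torch.randn(2, 8, 16)
scores = torch.rand(2, 8)
pruned, kept = prune_tokens(h, scores)
print(pruned.shape, kept.shape)  # (2, 4, 16) and (2, 4)
```

Comparing against such a deterministic variant would directly test whether the fuzzy membership adds anything beyond plain ranking.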
ICLR_2021_512
ICLR_2021
- Important pieces of prior work are missing from the related work section. The paper seems to be strongly related to Tensor Field Networks (TFN) (Thomas et al. 2018), as both define Euclidean and permutation equivariant convolutions on point clouds / graphs. Furthermore, there are several other methods that operate on graph that are embedded in a Euclidean space, such as SchNet (Schütt et al 2017). The graph network methods currently discussed all do not include the point coordinates in their operations. Lastly, the proposed method operates globally linearly on features on a graph, equivariantly to permutations, which is done in prior work, e.g. Maron 2018. - The experimental section only compares to methods that in their convolution are unaware of the point coordinates (except for in the input features). A comparison to coordinate-aware methods, such as TFN or SchNet seems appropriate. - The core object, the isometric adjacency matrix G, is ill-defined. In Eq 1 it is defined trough the embedding coordinates and “the transformation invariant rank-2 tensor” T. This object is not defined in the paper, which makes section 3 very confusing to read. In section 3, it appears like that the defined objects D take the role of object G in the above, so what is the role of eq 1? - In section 3, the authors speak of “collections of rank-p tensors”. However, these objects seem to actually be tensors of the shape N^a x d^p, where N is the number of nodes, d is the dimensionality of the embedding, and a and p are natural numbers. These objects transform under both permutations and Euclidean transformations in the obvious way. Why not make this fact explicit? That would make section 3 much easier to read. It seems like that when p=0, then a=1, and when p>0, then a=2. Except for in sec 3.2.2, in which a p=3 tensor has a=1. - In Sec 3.2, what are f_in and f_out? Are these the dimensionalities of the tensor product representation? Or do they denote the number of copies of the representation? If it’s the former, I don’t see how the network is equivariant. If it’s the latter, I don’t understand the last paragraph of 3.2.2, which says 1H \in R^{N x f_in}, which looks like a 0-tensor. - Can the authors clarify “To achieve translation equivariance, a constant tensor can be added to the output collection of tensors.”? The proposed method seems to only lead to translation invariant features. I do not follow how adding a constant tensor leads to translation equivariance that is not invariance. - Am I correct in understanding that the method scales cubic with the number of vertices (e.g. eqs 4, 6)? Or is there some sparsity used in the implementation, but not mentioned? Should we expect a method of cubic complexity to scale to 1M vertices? In a naïve implementation, a fast modern GPU with 14.2E12 flops would need 20h for a single 1Mx1M matrix-matrix multiplication (1E18 floating point operations). - The authors claim the method scales to 1M vertices, but I cannot find this in the experiments. Table 4 speaks of 155k vertices. How did the authors determine the method scales to 1M vertices? Recommendation: In its current form, I recommend rejection of this paper. Section 3 is insufficiently clear written, the related work lack important references to prior work and the experiments lack a comparison to potentially strong other methods. This is a shame, because I’d like to see this paper succeed, as the core idea is very strong. Significant improvements in the above criticisms can improve my score. 
Suggestions for improvement: - Be clear about what the G object is and what eq 1 means. - Be explicit about types the objects, be more explicit about the indices that refer to the permutation representation, to the indices that refer to the Euclidean representation and the indices that refer to copies of the same representation. I think there is an opportunity to be more clear, more explicit, while reducing notational clutter. - Expand the related work section - Compare to the strong baselines that use the coordinates. - Provide argumentation for the claim to scale to 1M vertices. Minor points: - Eq 7, \times should be \otimes? - Eq 14, what is j? - The authors write: “A, B and C are X, Y and Z respectively”. Perhaps this could be re-written to the easier to read “A=X, B=Y and C=Z”. This happens each time the word “respectively” is used. - Table 3 typo, gluster -> cluster Post rebuttal The authors addressed all my concerns and strongly improved their paper. I think it is now a good candidate for acceptance, as it provides an interesting alternative to / variation on tensor field networks. I raise my rating from 4 to 7.
- The experimental section only compares to methods that in their convolution are unaware of the point coordinates (except for in the input features). A comparison to coordinate-aware methods, such as TFN or SchNet seems appropriate.
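For reference, the back-of-the-envelope arithmetic behind the 20 h figure quoted in the review above (counting one fused multiply-add per entry triple, i.e. $N^3$ operations for an $N \times N$ matrix-matrix product; counting multiplies and adds separately would roughly double it):

$$
N^3 = (10^6)^3 = 10^{18}\ \text{ops},
\qquad
\frac{10^{18}\ \text{ops}}{1.42 \times 10^{13}\ \text{ops/s}} \approx 7.0 \times 10^{4}\ \text{s} \approx 20\ \text{h}.
$$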
NIPS_2019_737
NIPS_2019
- The paper is well organized; however, it might be hard for readers to reproduce the results without the code, especially in Section 4. A step-by-step illustration of the optimization algorithm would help. - The authors did not show the possible weaknesses of the proposed model.
- The authors did not show the possible weaknesses of the proposed model.
ARR_2022_14_review
ARR_2022
- The idea makes sense for long document summarization, but I'm wondering what others have done in this area with a similar methodology. What does the system offer over previous extract-then-generate methodologies? This is troublesome considering that the paper does not have any Related Work section, nor experiments comparing other extract-then-generate approaches with the proposed model. - The extract-then-generate can be re-phrased as a two-phase summarization system that can be either trained independently or within an end-to-end model. The choice of baselines is a bit picky here considering the methodology. The authors should report the performance of other similar architectures (i.e., extract-then-generate or two-phase systems) here. - While results are competitive on arXiv, some of the baselines are composed of fewer parameters and obtain better performance. - The paper lacks human analysis, which is an important part of current summarization systems for revealing the limitations and qualities of the system that cannot be captured by automatic metrics. - The paper misses some important experimental details, such as the lambda parameter values, how the oracle snippets/sentences are picked, etc. It could be improved. In the introduction, the authors have made this claim: “We believe that the extract-then-generate approach mimics how a person would handle long-input summarization: first identify important pieces of information in the text and then summarize them.” It would be good to provide a reference for this claim.
- The idea makes sense for long document summarization, but I'm wondering what others have done in this area with a similar methodology. What does the system offer over previous extract-then-generate methodologies? This is troublesome considering that the paper does not have any Related Work section, nor experiments comparing other extract-then-generate approaches with the proposed model.
NIPS_2016_355
NIPS_2016
- Unfortunately, one major take-away here isn't particularly inspiring: e.g. there's a trade-off between churn and accuracy. - Also, the thought of having to train 30-40 models to burn in in order to test this approach isn't particularly appealing. Another interesting direction for dealing with churn could be unlabelled data, or applying constraints: e.g. if we are willing to accept X% churn, and have access to unlabeled target data, what's the best way to use that to improve the stability of our model?
- Also, the thought of having to train 30-40 models to burn in in order to test this approach isn't particularly appealing. Another interesting direction for dealing with churn could be unlabelled data, or applying constraints: e.g. if we are willing to accept X% churn, and have access to unlabeled target data, what's the best way to use that to improve the stability of our model?
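One way to formalize the constrained setup suggested in the review above (this formulation simply writes out the reviewer's suggestion; it is not taken from the paper): given unlabeled target data $\{x_i\}_{i=1}^{m}$ and the deployed model $f_{\text{old}}$, train

$$
\min_{f_{\text{new}}}\ \widehat{L}(f_{\text{new}})
\quad \text{s.t.} \quad
\frac{1}{m}\sum_{i=1}^{m} \mathbf{1}\!\left[f_{\text{new}}(x_i) \neq f_{\text{old}}(x_i)\right] \le \varepsilon,
$$

where $\widehat{L}$ is the usual supervised training loss and $\varepsilon$ is the acceptable churn rate (the "X%" above); in practice the indicator would be relaxed to a differentiable disagreement penalty.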
NIPS_2020_1384
NIPS_2020
1. Why is “side information” defined as the input features for an input example? Moreover, the function of “side information” should be further demonstrated by experiments. 2. The introduction of related work is not sufficient, and more work on GLNs should be discussed to reflect the advantages or differences of the proposed method, such as the difference from B-GLN. 3. More detailed experimental analysis should be connected to the objective function.
2. The introduction of related work is not sufficient, and more work on GLNs should be discussed to reflect the advantages or differences of the proposed method, such as the difference from B-GLN.
NIPS_2016_115
NIPS_2016
weakness of the paper, particularly considering the fact that the techniques described have already been published, is the lack of further experimental evidence of the efficacy of the approach. At this point there is certainly no shortage of more significant benchmarks whose state-of-the-art systems are RNNs. From a performance standpoint this could be a very influential paper, but based on the current experiments, the true potential of the approach is not clear. Other minor comments: - hypers -> hyperparameters - why only one dropout rate for Moon's approach, while Variational dropout gets an input-output and a recurrent dropout parameter?
- hypers -> hyperparameters - why only one dropout rate for Moon's approach, while Variational dropout gets an input-output and a recurrent dropout parameter?
NIPS_2019_1270
NIPS_2019
of the two variants (does the VI version run faster in terms of wall-clock time, is it more sample efficient, does it generalize better, …?). Given the small size of the toy domain, other (brute-force, or inefficient sampling-based) methods could potentially be included as well, but it would be OK to dismiss them by showing results on a larger-scale task where competitor methods can no longer be applied. Another competitor for comparison in Fig 1 would be the Dirichlet estimate with (a) copying the action-distribution from the nearest observed neighbour state or (b) taking the average over all observed neighbour states within a certain radius. 2) Larger-scale experiments. Why were there no experiments with larger state-action spaces and non-trivial dynamics included (at least grid-worlds with walls, and other non-trivial tiles)? Currently it is hard to judge whether this was simply due to a lack of time or because the method has severe scalability issues. Very convincing experiments would be e.g. on simple video-game domains, (which naturally have a low-cardinality discrete state- and action-space) - simulators for such experiments are publicly available and comparison against other approaches would be easier. 3) Literature: there is a considerable body of literature on (hierarchical) inference in latent-variable models that is barely mentioned. E.g. in the language domain, topic models are mentioned but dismissed as “not discussing the problem of decision-making”. Can some of these methods be straight-forwardly be applied to the tasks/domains shown in the paper - it seems so, since the model in the paper is not explicitly used for decision-making. Please correct me if I’m wrong and discuss this in greater detail. Hierarchical inference for discrete-variable models is also discussed in the literature on Bayesian deep (reinforcement) learning and hierarchical representation learning with deep networks - importantly, these models are also optimized via variational inference (ELBO maximization), however under different approximate distributions (the emphasis is on differentiability, rather than closed-form expressions). What are the advantages/disadvantages compared to the presented method (scalability, data-efficiency, …)? I am of course happy to also see non-deep-neural-network approaches, but this literature must be discussed in order to put the method into perspective. See e.g. [1] for lots of up-to-date pointers to literature. [1] https://duvenaud.github.io/learn-discrete/ 4) Shortcomings of the method and implications of the simplifications/approximations. Please discuss the implications of the mean-field approximation for the variational distributions, beyond simply stating the mathematical form. The same applies for restricting \theta to be a scale parameter (line: 165) - ideally compare empirically against no restrictions and doing the full matrix inversion numerically (particularly since the experiments are on small domains), or against using a low-rank matrix factorization. Finally, please discuss the implications of using a square-exponential kernel - would it for instance still be suitable in grid-worlds with walls, or other situations where simple Euclidean distance of states is not indicative of the “generalizability” of state-dependent action-distributions. Originality: Medium - the derivation of the VI scheme and the EM scheme is interesting and novel, but replacing a sampling-based ELBO optimization with a VI-based one is a rather straightforward idea. 
Quality: Low - while the derivations are well-presented and sufficient detail is given, the experimental section lacks comparison against important methods. Some ablation studies and sensitivity-analysis w.r.t. Hyper-parameters would have been nice and results regarding larger-scale applications are crucially required to judge the significance of the approach. The literature-discussion lacks important parts. Clarity: High - the paper is generally well written. The only important improvement is a qualitative/informal discussion of some of the restrictions/approximations, such as the mean-field approximation - though the mathematical statement is sufficient in principle, adding one, two sentences would not hurt. Significance: Currently low - the method could potentially be quite significant, but this needs to be shown with experiments that compare against other state-of-the-art methods and larger-scale experiments. It remains unclear whether the same results could have been achieved with the sampling-based approach and where the advantages of the VI approach lie.
2) Larger-scale experiments. Why were there no experiments with larger state-action spaces and non-trivial dynamics included (at least grid-worlds with walls, and other non-trivial tiles)? Currently it is hard to judge whether this was simply due to a lack of time or because the method has severe scalability issues. Very convincing experiments would be e.g. on simple video-game domains, (which naturally have a low-cardinality discrete state- and action-space) - simulators for such experiments are publicly available and comparison against other approaches would be easier.
NIPS_2021_725
NIPS_2021
Comparing the occupational statistics computed by GPT2 vs. those from the United States is very interesting and informative. However, the presentation of the methodology and the subsequent discussion is confusing to me. Particularly for section 3.4, I am not sure what “adj.” in equation (1) means and why “adj. Pred” is appropriate as a scaling factor. Would appreciate it if the authors could clarify and make this section clearer. The analysis of intersection effects is interesting, but I fail to see a clear presentation of the statistical significance of these results. It may be clearer if the authors could specify p-values on some regressors and offer some discussion. From Table 3, I also do not believe that average pseudo-R2 is necessarily a meaningful measure for the individual factor. The authors claim the contribution of “benchmarking the extent of bias relative to inherently skewed societal distributions of occupation associations”. However, I have some reservations as 1) the authors did not propose any quantitative measurement of the extent of occupation bias relative to real distributions in society; 2) the authors did not compare any models other than GPT2. Several sections of the paper read as confusing to me. There is a missing citation / reference in Line 99, section 3.1. The notation \hat{D}(c) from Line 165, section 3.4 is unreferenced. The authors made a great effort to acknowledge the limitations of their work.
1) the authors did not propose any quantitative measurement of the extent of occupation bias relative to real distributions in society;
hNkXTqDrfb
ICLR_2025
1. The generative process description for the data is somewhat unclear. Based on the current explanation, it seems that the Bayes optimal classifier might not need to rely on semantic features; syntactic features appear sufficient to solve the task in an asymptotic setup. This would have undermined the notion that x2 captures specialized knowledge; specialized knowledge should provide benefits on a limited set of specific tasks, rather than being redundant to general knowledge. Figure 3 and the reference to Li et al. (2019) suggest that my interpretation is incorrect. E.g., in Li et al. there's a stochastic choice in generating features for each example (which would be 'easy' or 'hard' in this case), but it is not what is happening here. This needs to be explained better. 2. I'm unclear on the rationale for assuming a block-diagonal structure (Eq. 2). In realistic settings, one wouldn't typically know which features are hard or easy in advance. 3. We normally use adaptive gradient methods rather than SGD. Would such a method, which rescales gradient components, affect the findings? For instance, might it amplify updates for weights associated with hard features (i.e., x2)? 4. What if the initial learning phase is skipped and we start annealing immediately? How would it affect learning dynamics? 5. The decision to link optimization parameters (such as the learning rate in the second stage) to the data generation procedure seems unrealistic. Is the idea that this selection approximates parameters typically chosen through hyperparameter tuning? This paper demonstrates that specific parameters enable the model to acquire hard-to-learn knowledge in the later stage without undermining the easy-to-learn knowledge, but it doesn't show that this outcome holds across a wide range of parameters or those commonly used in practice. 6. I would discourage the authors from using the terms semantics and syntax as these seem to be misleading. More broadly, it is unclear to me if the two stages are the result of the explicit choices in defining the generative process and the stage of learning, or would emerge under a broader range of settings. Overall, to fully grasp the importance of the assumptions, a careful reading of the proofs is necessary, which was challenging for me and would probably be challenging for conference reviewers in general. Given the paper's length (56 pages!), it seems more appropriate for a journal (e.g., JMLR).
3. We normally use adaptive gradient methods rather than SGD. Would such a method, which rescales gradient components, affect the findings? For instance, might it amplify updates for weights associated with hard features (i.e., x2)?
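For context on weakness 3, the per-coordinate rescaling the reviewer refers to is, e.g., the standard Adam update (written here generically, not tied to this paper's setup):

$$
m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t,
\qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^{2},
\qquad
\theta_t = \theta_{t-1} - \eta\, \frac{\hat m_t}{\sqrt{\hat v_t} + \epsilon},
$$

where $\hat m_t, \hat v_t$ are the bias-corrected moments and all operations are element-wise. The division by $\sqrt{\hat v_t}$ gives coordinates with persistently small gradients (plausibly the weights tied to the hard features $x_2$) relatively larger effective steps than under SGD, which is why it is natural to ask whether the two-stage dynamics would survive.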
NIPS_2020_1433
NIPS_2020
I have a few concerns about the experiments. ## How HMC-within-Gibbs is used as a baseline 1. The concrete variant used is NUTS-within-Gibbs. In fact, it might be worth checking just HMC-within-Gibbs because the no-U-turn criterion is known to be sub-optimal within Gibbs. A similar grid search on step size and step number can be done for HMC-within-Gibbs for a fair comparison. 2. By inspecting the code, the MCMC component for discrete variables is found to be an MH step. This should be emphasised in the section. 3. Continuing from (2), different MCMC components for discrete variables should be experimented with. For example, use the standard particle Gibbs (conditional-SMC) sampler as a drop-in replacement for the MH sampler. That said, I still expect M-HMC to be better because continuous and discrete variables can be proposed jointly, which is not possible for HMC-within-Gibbs. ## How NUTS is used in 4.2 1. How is adaptation for NUTS set up? 2. The outperformance of M-HMC over NUTS may be due to NUTS getting stuck in some of the modes for too long. Have you inspected if that is the case? 3. Continuing from (2), if that is the case, manually using a larger step size or lowering the target acceptance ratio might simply be better (assuming dual averaging is currently used for adaptation). Finally, too much content on the correctness of the proposed methods is in the appendix. It would be better to spend more of the main text on a proof sketch using the lemmas in the appendix.
3. Continuing from (2), if that is the case, manually using a larger step size or lowering the target acceptance ratio might simply be better (assuming dual averaging is currently used for adaptation). Finally, too much content on the correctness of the proposed methods is in the appendix. It would be better to spend more of the main text on a proof sketch using the lemmas in the appendix.
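To make the baseline discussed above concrete, here is a minimal, hand-rolled sketch of HMC-within-Gibbs with a Metropolis-Hastings step for a single binary variable, on a toy two-component Gaussian mixture (everything here is illustrative and assumed; it is not the paper's code or its experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([-3.0, 3.0])      # component means, indexed by discrete z
log_w = np.log([0.3, 0.7])      # component weights

def logp(x, z):                 # log p(x, z) up to an additive constant
    return log_w[z] - 0.5 * (x - mu[z]) ** 2

def grad_x(x, z):               # d/dx log p(x, z)
    return -(x - mu[z])

def hmc_step(x, z, eps=0.2, n_leap=20):
    p = rng.normal()
    x_new, p_new = x, p
    p_new += 0.5 * eps * grad_x(x_new, z)           # leapfrog half step
    for i in range(n_leap):
        x_new += eps * p_new
        g = grad_x(x_new, z)
        p_new += eps * g if i < n_leap - 1 else 0.5 * eps * g
    log_acc = (logp(x_new, z) - 0.5 * p_new ** 2) - (logp(x, z) - 0.5 * p ** 2)
    return x_new if np.log(rng.uniform()) < log_acc else x

def mh_discrete_step(x, z):
    z_prop = 1 - z                                  # symmetric flip proposal
    log_acc = logp(x, z_prop) - logp(x, z)
    return z_prop if np.log(rng.uniform()) < log_acc else z

x, z = 0.0, 0
samples = []
for _ in range(5000):
    x = hmc_step(x, z)          # continuous update, conditioned on z
    z = mh_discrete_step(x, z)  # discrete update, conditioned on x
    samples.append((x, z))
print("empirical P(z = 1):", np.mean([s[1] for s in samples]))
```

On a well-separated mixture like this, the within-Gibbs scheme mixes poorly across components (the conditional for z given x is sharply peaked), which is precisely the kind of failure mode that makes proposing discrete and continuous variables jointly, as M-HMC does, attractive.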
NIPS_2022_1590
NIPS_2022
1) One of the key components is the matching metric, namely, the Pearson correlation coefficient (PCC). However, the assumption that PCC is a more relaxed constraint compared with KL divergence because of its invariance to scale and shift is not convincing enough. The constraint strength of a loss function is defined via its gradient distribution. For example, KL divergence and MSE loss have the same optimal solution while MSE loss is stricter than KL because of stricter punishment according to its gradient distribution. From this perspective, it is necessary to provide the gradient comparison between KL and PCC. 2) The experiments are not sufficient enough. 2-1) There are limited types of teacher architectures. 2-2) Most compared methods are proposed before 2019 (see Tab. 5). 2-3) The compared methods are not sufficient in Tab. 3 and 4. 2-4) The overall performance comparisons are only conducted on the small-scale dataset (i.e., CIFAR100). Large datasets (e.g., ImageNet) should also be evaluated. 2-5) The performance improvement compared with SOTAs is marginal (see Tab. 5). Some students only have a 0.06% gain compared with CRD. 3) There are some typos and some improper presentations. The texts of the figure are too small, especially the texts in Fig.2. Some typos, such as “on each classes” in the caption of Fig. 3, should be corrected. The authors have discussed the limitations and societal impacts of their works. The proposed method cannot fully address the binary classification tasks.
2) The experiments are not sufficient enough. 2-1) There are limited types of teacher architectures. 2-2) Most compared methods are proposed before 2019 (see Tab.
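To make the requested gradient comparison concrete: for a student distribution $q = \mathrm{softmax}(z/\tau)$ and a teacher distribution $p$, the KL term has the familiar per-logit gradient, while the Pearson correlation coefficient (PCC) is invariant to positive affine transformations of its first argument (standard identities, stated here only to spell out what the review is asking for):

$$
\frac{\partial}{\partial z_i}\,\mathrm{KL}\!\left(p \,\|\, q\right) = \frac{1}{\tau}\,(q_i - p_i),
\qquad
\mathrm{PCC}(a\,q + b,\; p) = \mathrm{PCC}(q, p) \quad \text{for } a > 0 .
$$

The KL gradient scales with the absolute gap $q_i - p_i$, whereas the PCC gradient has no component along the scale and shift directions of $q$; writing out the full PCC gradient and comparing its distribution to $q_i - p_i$ is what would substantiate the "more relaxed constraint" claim.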
oSdrJyb4UH
ICLR_2025
W1. This paper claims that the proposed method has good expressiveness. However, I found no (theoretical) analysis regarding the expressiveness. W2. The proposed method is actually pretty simple, and the rationale is simple, monophyly (lines 50-52), i.e., 2-hop neighbors are helpful for node classification on both homophilic and heterophilic graphs. Such a simplicity is good, but I found some defects unsolved. W2.1 Why are 2-hop neighbors useful for node classification on graphs regardless of whether they are homophilic or heterophilic? W2.2 It looks like the proposed method (from Eq. 1 to 3, Figure 2 c) still utilizes information from 1-hop neighbors. In my understanding, the core idea of this paper is to collect edge messages (from 1-hop neighbors) by Eq. 3 and then let the edge messages interact via a self-attention module Eq. 2. However, overall, no information from 2-hop neighbors is included. Again, this method is simple, but it is highly unclear why it is effective. W2.3 Why previous methods cannot capture the information from 2-hop neighbors? If the 2-hop information is that useful, I think many previously proposed methods should be able to capture it. E.g., GPRGNN [1], ACM-GNN [2]. *Minor weaknesses: * W3. Some baselines are missing [3-5]. After adding them back, the performance advantage of the proposed method is not that significant on datasets Roman Empire, A-ratings, and Tolokers. W4 The strategies proposed in Section 4.2 are a bit heuristic-based. W5. The code is not provided, which lowers the reproducibility of this study. [1] Chien, Eli, et al. "Adaptive Universal Generalized PageRank Graph Neural Network." International Conference on Learning Representations. [2] Luan, Sitao, et al. "Revisiting heterophily for graph neural networks." Advances in neural information processing systems 35 (2022): 1362-1375. [3] Zhao, Kai, et al. "Graph neural convection-diffusion with heterophily." Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. 2023. [4] Jang, Hyosoon, et al. "Diffusion probabilistic models for structured node classification." Advances in Neural Information Processing Systems 36 (2024). [5] Zheng, Amber Yijia, et al. "Graph Machine Learning through the Lens of Bilevel Optimization." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
2. However, overall, no information from 2-hop neighbors is included. Again, this method is simple, but it is highly unclear why it is effective.
ICLR_2023_4641
ICLR_2023
Weakness: Experiments are not sufficient. Table 1 only shows the comparison with MADUN; other state-of-the-art methods should be included. Experimental results are not convincing, particularly for the CS-MRI reconstruction problem. The difference between different methods can hardly be observed in Fig. 8 and Fig. 9. Real DICOM images are recommended as experimental data, not PNG images. The fastMRI challenge dataset would be a good choice. Inference speed should be compared between different methods.
9. Real DICOM images are recommended as experimental data, not PNG images. The fastMRI challenge dataset would be a good choice. Inference speed should be compared between different methods.
Nq7yKYL0Bp
ICLR_2025
1. The proposed method underperforms when conditioned on complex interlocking topologies, and this framework does not yet support SSE annotation with beta sheets or other elements. 2. For binder design, ProtPainter just provides an empirical conformation estimation. Further optimization and validation are required. 3. For users facing real-world tasks, a more user-friendly, responsive interface that allows users to draw or drag proteins with ease would be needed.
2. For binder design, ProtPainter just provides an empirical conformation estimation. Further optimization and validation are required.
NIPS_2016_153
NIPS_2016
weakness of previous models. Thus I find these results novel and exciting.Modeling studies of neural responses are usually measured on two scales: a. Their contribution to our understanding of the neural physiology, architecture or any other biological aspect. b. Model accuracy, where the aim is to provide a model which is better than the state of the art. To the best of my understanding, this study mostly focuses on the latter, i.e. provide a better than state of the art model. If I am misunderstanding, then it would probably be important to stress the biological insights gained from the study. Yet if indeed modeling accuracy is the focus, it's important to provide a fair comparison to the state of the art, and I see a few caveats in that regard: 1. The authors mention the GLM model of Pillow et al. which is pretty much state of the art, but a central point in that paper was that coupling filters between neurons are very important for the accuracy of the model. These coupling filters are omitted here which makes the comparison slightly unfair. I would strongly suggest comparing to a GLM with coupling filters. Furthermore, I suggest presenting data (like correlation coefficients) from previous studies to make sure the comparison is fair and in line with previous literature. 2. The authors note that the LN model needed regularization, but then they apply regularization (in the form of a cropped stimulus) to both LN models and GLMs. To the best of my recollection the GLM presented by pillow et al. did not crop the image but used L1 regularization for the filters and a low rank approximation to the spatial filter. To make the comparison as fair as possible I think it is important to try to reproduce the main features of previous models. Minor notes: 1. Please define the dashed lines in fig. 2A-B and 4B. 2. Why is the training correlation increasing with the amount of training data for the cutout LN model (fig. 4A)? 3. I think figure 6C is a bit awkward, it implies negative rates, which is not the case, I would suggest using a second y-axis or another visualization which is more physically accurate. 4. Please clarify how the model in fig. 7 was trained. Was it on full field flicker stimulus changing contrast with a fixed cycle? If the duration of the cycle changes (shortens, since as the authors mention the model cannot handle longer time scales), will the time scale of adaptation shorten as reported in e.g Smirnakis et al. Nature 1997.
4. Please clarify how the model in fig. 7 was trained. Was it on full field flicker stimulus changing contrast with a fixed cycle? If the duration of the cycle changes (shortens, since as the authors mention the model cannot handle longer time scales), will the time scale of adaptation shorten as reported in e.g Smirnakis et al. Nature 1997.
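For reference, the model family whose settings the review above asks to be matched is, schematically, a Poisson GLM with a stimulus filter $k$, a spike-history filter $h$, and cross-neuron coupling filters $\{c_j\}$, fitted with an L1 penalty and a low-rank (space $\times$ time) stimulus filter (a generic write-up of that family, assumed here rather than copied from either paper):

$$
\lambda_i(t) = \exp\!\Big(\langle k, x(t)\rangle + \langle h, y_i^{\mathrm{hist}}(t)\rangle + \sum_{j \ne i} \langle c_j, y_j^{\mathrm{hist}}(t)\rangle + b\Big),
\qquad
\hat\theta = \arg\min_\theta \sum_t \big(\lambda_i(t)\,\Delta - y_i(t)\log \lambda_i(t)\big) + \alpha\,\lVert \theta \rVert_1,
\qquad
k = \sum_{r=1}^{R} u_r v_r^{\top}.
$$

Reporting which of these ingredients (coupling filters, L1 penalty, low-rank filter) is included for each compared model would make the state-of-the-art comparison unambiguous.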
AkL2ID5rRV
ICLR_2025
[1] Clarification. Several details need to be clarified to better understand the model and the training strategy. (a) What does the model estimate w.r.t. the PBR parameters? Does it only estimate albedo? (b) With sampled metallic, roughness and lighting envmaps, do we apply the metallic and roughness globally? If yes: 1)what happens if the original CAD model is already associated with spatially-varying (SV) BRDF maps? 2) And if applied globally, does this strategy diminish the model's generation ability towards real images with complex SV materials? 3) Given a good portion of Objaverse models are assigned with PBR materials, does it benefit the training to also predict ground truth BRDF (roughness, metallic) without manually sampling and enforcing global roughness and metallic? (c) Is the split-sum approximation only applied to synthesizing estimated image from estimated representations, or it is also used to render ground truth images? Are ground truth images rendered on-the-fly for each batch in training? [2] Writing. Language issues are abundant and need to be fixed for a polished version. Examples: (a) L015: for what purposes? (b) L020: Need to introduce the full name of PBR before first use of the abbreviation. (c) L050, L235: Need to clarify 'dependence on images rendered under fixed and simple lighting conditions' of previous methods. Mostly previous methods use PBR materials and envmap base lighting similar to this paper, so it would be important to clarify this assertion. (d) L186: functionalities -> downstream applications of ... (e) L283: what is 'a richer set of equations'? [3] Additional evaluation results on images of complex lighting and materials. The paper is able to showcase the robustness towards complex lighting and materials in Fig. 9, however one scene is too few, and comparison with baselines on this setting is necessary to further justify the claim.
1)what happens if the original CAD model is already associated with spatially-varying (SV) BRDF maps?
NIPS_2017_114
NIPS_2017
- More evaluation would have been welcome, especially on CIFAR-10 in the full label and lower label scenarios. - The CIFAR-10 results are a little disappointing with respect to temporal ensembles (although the results are comparable and the proposed approach has other advantages) - An evaluation on the more challenging STL-10 dataset would have been welcome. Comments - The SVHN evaluation suggests that the model is better than pi and temporal ensembling, especially in the low-label scenario. With this in mind, it would have been nice to see if you can confirm this on CIFAR-10 too (i.e. show results on CIFAR-10 with fewer labels) - I would have liked to see what the CIFAR-10 performance looks like with all labels included. - It would be good to include in the left graph in fig 3 the learning curve for a model without any mean teacher or pi regularization for comparison, to see if mean teacher accelerates learning or slows it down. - I'd be interested to see if the exponential moving average of the weights provides any benefit on its own, without the additional consistency cost.
- More evaluation would have been welcome, especially on CIFAR-10 in the full label and lower label scenarios.
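The "EMA of the weights on its own" ablation suggested above boils down to the following update (a minimal sketch; the decay value is assumed and the consistency cost is simply omitted):

```python
# Maintain a teacher as an exponential moving average (EMA) of the
# student's weights, without any additional consistency loss.
import copy
import torch

def make_teacher(student: torch.nn.Module) -> torch.nn.Module:
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, decay: float = 0.999):
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)
    # Buffers (e.g. BatchNorm running stats) would also need copying in a real run.

# Inside a standard supervised training loop:
#   loss = criterion(student(x), y); loss.backward(); optimizer.step()
#   ema_update(teacher, student)   # teacher tracks the student
# Evaluating `teacher` then isolates the effect of weight averaging alone.
```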
NIPS_2018_378
NIPS_2018
with any practical idea, the devil is in the details and execution. I feel that this paper has room for improvement to more thoroughly compare various choices or explain the choices better (e.g. choice of MeshNet and the underlying setting of multiple sites sharing the same exact deep learning model, etc. See below) Here is what I mean: 1. There is no comparison of using different network architectures. MeshNet may be just fine; however, it is not clear if other base learning methods can compare because it is impractical to assume MeshNet to represent a typical performance outcome 2. There is no baseline comparison: i.e. something like (H+N+B)_MAP that the last row of Table 2 should be compared to. Without a baseline comparison, it is not clear how much is lost with making the practical tradeoffs by using PWC. 3. Subsequent learning procedure is stated to be "much faster," (lines 158) but no formal comparison is given. I feel that this is an important comparison since the paper claims this is a difficulty individual sites have (lines 33-36) 4. What are the computational resources used to fit (aside from time) that makes continual learning easier? 5. Table 2 seems reasonable; however, I have questions about comparing apples to apples in the sense that comparisons should be made between using the same amount of data. For example, H->N and H->B use less data than H->N+B. Also, H->N->H and H->N->H use less data than H->N+B->H.
5. Table 2 seems reasonable; however, I have questions about comparing apples to apples in the sense that comparisons should be made between using the same amount of data. For example, H->N and H->B use less data than H->N+B. Also, H->N->H and H->N->H use less data than H->N+B->H.
NIPS_2019_1200
NIPS_2019
of the proposed method would be highly appreciated and useful. Clarity: The paper is clearly written and easy to read. Most of the notions introduced are well explained and the paper is well organized. Significance: Extending RPCA to nonlinear settings is an important research topic and could prove to have a high impact in practical applications. However I am not fully convinced by the experiments and some more detailed comments are presented below. General comments/questions: • A short introduction to RPCA would be useful for readers who are not very familiar with this technique and to see how NRPCA connects to RPCA. • There is some work on nonlinear settings connected to RPCA that the paper would need to make the connection to and possibly compare with: o B. Dai et al 2018-Connections with robust PCA and the role of emergent sparsity in variational autoencoder models o Y. Wang et al – Green generative models • Overall, I would have liked to see a more detailed description of the optimization problem from Sect. 6 and maybe a bit less on the curvature estimation especially sect. 5.1 which introduces a lot of notation that is not used afterwards (could maybe go into the supplementary material). The closed-form in eq (13) is not straightforward, especially introducing the soft-thresholding operator. • The paper seems to take the approach that outliers need to be corrected, but I believe this depends on the application, and a discussion on correcting vs. identifying outliers would be relevant. How can we distinguish between these two very different tasks? Detailed comments: • Line 19: “finding the nodes of the graph” – not sure I understand • Line 20: The references cited, as least 16 (LLE) and 17 (Isomap) (I am not familiar with 14) do not use the graph Laplacian. Isomap uses multidimensional scaling to do the embedding, not the Laplacian. • Line 21: the literature on outlier detection seems quite old. • Does the methodology presented in Sect 2 work for non-Gaussian noise too? • Lines 55-61: is the neighbourhood patch defined with respect to X_i or \tilde{X}_i? I would believe it should be the noisy data \tilde{X}_i, but maybe I am missing something? • Line 57: consisting • Sect 5.2: I am not sure I understand the need to use \epsNN instead of kNN if anyway afterwards we have to pick randomly m points within the \epsNN? This is related to a more general comment: \epsNN is known to be difficult in very high dimensions where because of the curse of dimensionality almost all points tend to fall within a small neighbourhood and points tend to be equidistant. Why do the authors choose randomly m points within \epsNN instead of either using directly kNN with k=m or use all the points within \epsNN? • Is the neighbourhood size \eta the same for all points? Could this be a problem if the manifold is not sampled uniformly and the density varies? • The notations for the radius and the residual term – would help if they were different instead of having both as R. Maybe small r for the radius? • Eq (11) is a bit confusing as it uses both the X_i, X_i_j and p,q notations in the same equation and even the same term. Does the right term have \Sigma^2 (similar in eq (12))? • Would be good to have a derivation of (12) in the supplementary material. • The approximation in (11) seems to work for k sufficiently large, but that would include all the points in the limit. Also, this brings me back to the discussion on \epsNN vs kNN: if we need a very large k, why first do an \epsNN and then pick randomly m points? 
• Sect 6: writing L^i as a function of S is not straightforward and a more detailed derivation would be useful. If I understand correctly from Alg 1 the procedure in Sect 6 is iterative. If so, this should be mentioned and also explain how you choose the number of steps T or if there is a stopping criterion. • Lines 208-210: How could we know when the neighborhood is wrong because of the strong sparse noise in order to apply the solution outlined? • Line 216: is p=3? • In Fig. 1 I would start with the noisy data which is the input as in Fig. 4. Adding it at the very end is a bit counterintuitive. If I understand well the authors apply Alg 1 either with one iteration T=1 or with two iterations T=2? What happens for larger T? Usually iterative algorithms run until some criterion is fulfilled, with T >> 2. • Line 224: no reference to Laplacian eigenmaps, and was not cited either in the introduction. • Fig. 3: t-SNE is known to work very well on MNIST. A comparison would be useful. How do you explain the gaps in the denoised versions? Maybe a classification task would help to reveal how much the proposed method works better compared to existing approaches. What is the dimension of the NRPCA space, is it two? Could you also show those results if dim(NRPCA) = 2 like in Fig. 1? Could you please provide the parameters used for Laplacian eigenmaps and Isomap for the original vs. denoised versions? • Fig. 4: LLE is applied in the space of NRPCA if I understand correctly. What is the dimension of the NRPCA space, 2D? • It seems that NRPCA can be used either independently (just like any dimension reduction method) or as a preprocessing to other methods (LLE, Isomap etc). Would be useful to state this somewhere in the paper. How many iterations are used here for each figure? If I understand correctly, T=1 for second plot, and T=2 for the two rightmost plots. I’m still wondering what happens for larger T as in a previous comment? • Line 258: equally
4. Adding it at the very end is a bit counterintuitive. If I understand well the authors apply Alg 1 either with one iteration T=1 or with two iterations T=2? What happens for larger T? Usually iterative algorithms run until some criterion is fulfilled, with T >> 2. • Line 224: no reference to Laplacian eigenmaps, and was not cited either in the introduction. • Fig.
NIPS_2017_631
NIPS_2017
- I don't understand why Section 2.1 is included. Batch Normalization is a general technique as is the proposed Conditional Batch Normalization (CBN). The description of the proposed methodology seems independent of the choice of model and the time spent describing the ResNet architecture could be better used to provide greater motivation and intuition for the proposed CBN approach. - On that note, I understand the neurological motivation for why early vision may benefit from language modulation, but the argument for why this should be done through the normalization parameters is less well argued (especially in Section 3). The intro mentions the proposed approach reduces over-fitting compared to fine-tuning but doesn't discuss CBN in the context of alternative early-fusion strategies. - As CBN is a general method, I would have been more convinced by improvements in performance across multiple model architectures for vision + language tasks. For instance, CBN seems directly applicable to the MCB architecture. I acknowledge that needing to backprop through the CNN causes memory concerns which might be limiting. - Given the argument for early modulation of vision, it is a bit surprising that applying CBN to Stage 4 (the highest level stage) accounts for majority of the improvement in both the VQA and GuessWhat tasks. Some added discussion in this section might be useful. The supplementary figures are also interesting, showing that question conditioned separations in image space only occur after later stages. - Figures 2 and 3 seem somewhat redundant. Minor things: - I would have liked to see how different questions change the feature representation of a single image. Perhaps by applying some gradient visualization method to the visual features when changing the question? - Consider adding a space before citation brackets. - Bolding of the baseline models is inconsistent. - Eq 2 has a gamma_j rather than gamma_c L34 'to let the question to attend' -> 'to let the question attend' L42 missing citation L53 first discussion of batch norm missing citation L58 "to which we refer as" -> "which we refer to as" L89 "is achieved a" -> "is achieved through a"
- I don't understand why Section 2.1 is included. Batch Normalization is a general technique as is the proposed Conditional Batch Normalization (CBN). The description of the proposed methodology seems independent of the choice of model and the time spent describing the ResNet architecture could be better used to provide greater motivation and intuition for the proposed CBN approach.
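As background for the weaknesses above, a minimal sketch of what conditional batch normalization amounts to, assuming PyTorch: a plain BatchNorm layer whose per-channel scale and bias are shifted by deltas predicted from the language embedding. The layer sizes here are arbitrary, and this is not the authors' implementation.

import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    def __init__(self, num_channels, cond_dim):
        super().__init__()
        # Normalization without its own affine transform; gamma/beta are supplied below.
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        self.gamma = nn.Parameter(torch.ones(num_channels))
        self.beta = nn.Parameter(torch.zeros(num_channels))
        # Predict small deltas to gamma and beta from the conditioning (question) vector.
        self.delta = nn.Linear(cond_dim, 2 * num_channels)
        nn.init.zeros_(self.delta.weight)
        nn.init.zeros_(self.delta.bias)

    def forward(self, x, cond):
        d_gamma, d_beta = self.delta(cond).chunk(2, dim=1)       # each (B, C)
        gamma = (self.gamma + d_gamma)[:, :, None, None]
        beta = (self.beta + d_beta)[:, :, None, None]
        return gamma * self.bn(x) + beta

feat = torch.randn(4, 64, 14, 14)   # image feature map from any backbone
q_emb = torch.randn(4, 128)         # question embedding
out = ConditionalBatchNorm2d(64, 128)(feat, q_emb)

Because only the normalization parameters are modulated, the same mechanism can in principle be dropped into any architecture that uses BatchNorm, which is what the "general technique" point in the review is getting at.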
ICLR_2021_541
ICLR_2021
The main weakness of the paper, in my opinion, is the sensitivity of the approach to the regularization parameter β. Specifically, the optimal parameter choice relies on 1) knowing the type of attack and 2) having access to attacked images. It would be great if the authors can provide insights about choosing a good β while being oblivious to the type of attack. Questions and comments for the authors: At first glance, the method is counter-intuitive. If I understand correctly, the teacher model is fine-tuned with the first term of the total loss in Eq (3), i.e., starting from the poisoned model, you fine-tune with cross-entropy loss on the clean data. Hence, the optimal solution for the first term of the loss in (3) is the teacher model. The teacher model is also the minimizer for the second term in the loss. Therefore, I would expect the teacher model to be the optimal solution for minimizing (3). However, the paper shows that this is not the case, and optimizing (3) leads to a better model than the teacher model (i.e., the fine-tuned model). What am I missing? Have the authors considered top-down attention mechanisms, like CAM, GradCAM, and GradCAM++, to calculate the attention maps in place of the norm-based attentions used in the paper? Could you provide any insights on this? In "Fooling Network Interpretation in Image Classification", Subramanya et al. (ICCV 2019) propose adversarial patches that have a negligible effect on networks' attention (not in a backdoor setting). Given that the proposed method relies on attention, it would be interesting to see how it fares against such attacks. In all your 'clean accuracy' plots (e.g., Fig 2 right panel), could you please provide a zoomed-in version of the curves as well? Evaluation logic: Overall, I have a high opinion of this paper and appreciate the work the authors have put into writing a comprehensive article. I am scoring the paper as a 7. I would be happy to increase my score conditioned upon clarification of my questions and addressing the concerns. Post Rebuttal: I thank the authors for their response. I have two responses: 1) I still don't find the answer to my Q1 convincing; in particular, the 'filtering effect of distillation' mechanism requires more rigorous discussion, and 2) with regards to the ICCV2019 paper, I think the authors may have misinterpreted my point; my point here is that one could design backdoor attacks that do not affect the attention maps substantially and was wondering if the logic would hold for such attacks. However, I agree with the authors that the points go beyond the scope of the current paper and would be interesting for potential future work. In any case, I think the paper is a good contribution to the field and would still vote for accepting the paper.
2) with regards to the ICCV2019 paper, I think the authors may have misinterpreted my point; my point here is that one could design backdoor attacks that do not affect the attention maps substantially and was wondering if the logic would hold for such attacks. However, I agree with the authors that the points go beyond the scope of the current paper and would be interesting for potential future work. In any case, I think the paper is a good contribution to the field and would still vote for accepting the paper.
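To ground the reviewer's puzzle about Eq (3), here is a generic sketch of the kind of two-term objective being discussed: cross-entropy on clean data plus a β-weighted term matching norm-based attention maps between the student and the frozen fine-tuned teacher. The attention definition and shapes are common choices, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def attention_map(feat):
    # Norm-based spatial attention: sum of squared channel activations, L2-normalized.
    a = feat.pow(2).sum(dim=1)               # (B, H, W)
    return F.normalize(a.flatten(1), dim=1)  # (B, H*W)

def total_loss(student_logits, labels, student_feats, teacher_feats, beta):
    ce = F.cross_entropy(student_logits, labels)
    distill = sum(F.mse_loss(attention_map(fs), attention_map(ft))
                  for fs, ft in zip(student_feats, teacher_feats))
    return ce + beta * distill

# Toy tensors standing in for classifier logits and per-layer feature maps.
logits = torch.randn(8, 10, requires_grad=True)
labels = torch.randint(0, 10, (8,))
s_feats = [torch.randn(8, 32, 8, 8, requires_grad=True)]
t_feats = [torch.randn(8, 32, 8, 8)]
total_loss(logits, labels, s_feats, t_feats, beta=1.0).backward()

The puzzle is that the teacher itself appears to minimize both terms, so any gain from optimizing the sum would have to come from the training dynamics (e.g., a filtering effect of distillation) rather than from the minimizer of the objective alone.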
NIPS_2017_53
NIPS_2017
Weakness 1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA. 2. Given that the paper uses a bilinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B], which uses bilinear pooling for learning joint question-image representations. Right now, given the manner in which things are presented, a novice reader might think this is the first application of bilinear operations for question answering (based on reading till the related work section). Bilinear pooling is compared to later. 3. L151: Would be interesting to have some sort of a group norm in the final part of the model (g, Fig. 1) to encourage disentanglement further. 4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C] which also uses a bag-of-words representation. 5. (*) Sec. 4.2: it is not clear how the question is being used to learn an attention on the image feature, since the description under Sec. 4.2 does not match the equation in the section. Specifically, the equation does not have any term for r^q, which is the question representation. Would be good to clarify. Also it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (which the \alpha_v computation seems to do) might be ill-conditioned and numerically unstable. 6. (*) Is the object detection based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions correspond to which spatial locations in the feature map? 7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences). 8. L290: it would be good to clarify how the implemented bilinear layer is different from other approaches which do bilinear pooling. Is the major difference the dimensionality of embeddings? How is the bilinear layer swapped out with the Hadamard product and MCB approaches? Is the compression of the representations using Equation (3) still done in this case? Minor Points: - L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (aren’t we multiplying by a nicely-conditioned matrix to make sure everything is dense?). - Likewise, unclear why the attended image should be sparse. I can see this would happen if we did attention after the ReLU, but if sparsity is an issue why not do it after the ReLU? Preliminary Evaluation: The paper is a really nice contribution towards leveraging traditional vision tasks for visual question answering. Major points and clarifications for the rebuttal are marked with a (*). [A] Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. “Neural Module Networks.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1511.02799. [B] Fukui, Akira, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. “Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1606.01847.
[C] Zhou, Bolei, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. “Simple Baseline for Visual Question Answering.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1512.02167.
- L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (aren’t we multiplying by a nicely-conditioned matrix to make sure everything is dense?).
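For point 5 above, here is a generic sketch of what question-conditioned attention over image feature locations usually looks like when the question representation r^q enters the scoring explicitly, with a single softmax rather than a product of sigmoids. This is illustrative PyTorch code, not the paper's equation; all dimensions are arbitrary.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionGuidedAttention(nn.Module):
    def __init__(self, feat_dim, q_dim, hidden=256):
        super().__init__()
        self.proj_v = nn.Linear(feat_dim, hidden)
        self.proj_q = nn.Linear(q_dim, hidden)
        self.score = nn.Linear(hidden, 1)

    def forward(self, V, r_q):
        # V: (B, W*H, F) flattened feature map; r_q: (B, Q) question representation.
        joint = torch.tanh(self.proj_v(V) + self.proj_q(r_q).unsqueeze(1))  # (B, WH, hidden)
        alpha = F.softmax(self.score(joint).squeeze(-1), dim=1)             # (B, WH) attention weights
        return (alpha.unsqueeze(-1) * V).sum(dim=1)                         # (B, F) attended feature

V = torch.randn(2, 14 * 14, 512)     # conv feature map flattened over spatial locations
r_q = torch.randn(2, 300)            # bag-of-words question embedding
attended = QuestionGuidedAttention(512, 300)(V, r_q)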
ICLR_2023_2880
ICLR_2023
1. The first question is that the evidence of the motivation is not direct. Since the problem to be solved is that “a predictor suffers from the accuracy decline due to long-term and continuous usage”, the authors need to plot a figure about the decline in accuracy of a predictor over time (search steps) in different settings to support their claim. 2. Another question is why k = 2, 5, 2 is chosen for CIFAR-10, CIFAR-100, and ImageNet-16-120 in Table 1, while the result in Table 3 shows that the best k should be 5, 8, 2? The best results of the two tables do not seem to match. 3. Is there any related work about the mixed-batch method?
1. The first question is that the evidence of the motivation is not direct. Since the problem to be solved is that “a predictor suffers from the accuracy decline due to long-term and continuous usage”, the authors need to plot a figure about the decline in accuracy of a predictor over time (search steps) in different settings to support their claim.
NIPS_2021_1554
NIPS_2021
1 Why does Theorem 2 only show a second-order Taylor expansion of the excessive risk for group a, rather than a result similar to the one shown in Theorem 1? Since the unfairness defined in line 107 is based on the excessive risk gap ξ_a, it is more meaningful and consistent to see the theoretical results with respect to ξ_a for DP-SGD in Section 6. 2 In Section 9, the paper proposes a mitigation solution with extra terms. So how to determine appropriate values for γ_1 and γ_2? How do different values of γ_1 and γ_2 affect the performance of the original tasks, such as the MSE or prediction accuracy? 3 Can the authors explain more about the definition of excessive risk in line 103 and how to calculate it in practice, in terms of expectation? Since the optimal solution θ* is not the optimal solution for the loss function w.r.t. the data of group a, it can take negative values, right? But I see all excessive risk values in Figure 3 and Figure 7 are positive. What's more, are values of excessive risk comparable among different groups? If not, can the authors explain why excessive risk is a good representation for fairness?
3 Can the authors explain more about the definition of excessive risk in line 103 and how to calculate it in practice, in terms of expectation? Since the optimal solution θ* is not the optimal solution for the loss function w.r.t. the data of group a, it can take negative values, right? But I see all excessive risk values in Figure 3 and Figure 7 are positive. What's more, are values of excessive risk comparable among different groups? If not, can the authors explain why excessive risk is a good representation for fairness?
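One plausible reading of the excessive risk and the gap ξ_a that these questions refer to, written out as a small sketch: the expected loss of a group under the (randomized) private model minus its loss at the non-private optimum θ*, with the gap measuring how far a group deviates from the population value. The exact definitions in the paper (lines 103/107) may differ in detail; the data, loss, and noisy models below are toy stand-ins for DP-SGD outputs.

import numpy as np

def excessive_risk(loss_fn, data, theta_private_samples, theta_star):
    # E over private training runs of the loss at the private model,
    # minus the loss at the non-private optimum theta*.
    priv = np.mean([loss_fn(data, th) for th in theta_private_samples])
    return priv - loss_fn(data, theta_star)

def excessive_risk_gap(loss_fn, groups, all_data, theta_private_samples, theta_star):
    # xi_a: how much group a's excessive risk deviates from the population's.
    r_pop = excessive_risk(loss_fn, all_data, theta_private_samples, theta_star)
    return {a: excessive_risk(loss_fn, d, theta_private_samples, theta_star) - r_pop
            for a, d in groups.items()}

# Toy example: mean-squared-error loss for a linear model y ~ X @ theta.
def mse(data, theta):
    X, y = data
    return float(np.mean((X @ theta - y) ** 2))

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 3)), rng.normal(size=100)
groups = {"a": (X[:40], y[:40]), "b": (X[40:], y[40:])}
theta_star = np.linalg.lstsq(X, y, rcond=None)[0]
noisy_models = [theta_star + rng.normal(scale=0.1, size=3) for _ in range(20)]  # stand-in for DP-SGD runs
print(excessive_risk_gap(mse, groups, (X, y), noisy_models, theta_star))

Because θ* minimizes the population loss rather than each group's loss, the per-group quantity can indeed be negative, which is the concern raised in question 3.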
ICLR_2022_1361
ICLR_2022
There are some issues that could weaken the arguments made in this paper. 1. Invariance of NGD. The statement in Sec 2.3 assumes the Fisher information matrix (FIM) is non-singular (full-rank). The authors should clarify whether the FIMs considered in the experiments in Sec 3.1 and Sec 4 are indeed non-singular. If the pseudoinverse is indeed employed, the authors should also show that the invariance of NGD or NGF also holds in singular cases. 2. Theorem 4. In Theorem 4, the authors assume that the Jacobian matrix is full-rank. The authors should give an example of a 2-layer (linear) NN to illustrate how this assumption is satisfied. I also wonder why the FIM is non-singular in this case since Eq 9 implies that the FIM is invertible. 3. Meaning of D. Does D mean the number of input features? If so, I wonder whether the authors consider the most common cases when the number of input features (D) << the number of (training) data points (N) << the number of NN parameters. This question is closely related to point 4. I think all experiments considered should report these three numbers. 4. Empirical FIM approximations. In Eq 2, the exact FIM is computed under the expectation w.r.t. x. In this work, the FIM is computed over a set of training data, which is known as an empirical approximation in statistics. If the number of data points << the number of input features, the empirical approximation of the FIM could be bad (see [1]). In experiment 2, the authors show that NGD could perform poorly. The authors should report the number of parameters used in this experiment. According to the caption of Figure 4, the number of training data points is 2500 and the number of input features is 1000. I do not think the empirical approximation of the FIM over 2500 training data points is good enough since the FIM should be at least a 1000-by-1000 matrix. I would like to see the performance of NGD when the number of input features is changed from 1000 to 50. For the sparse classification task, the authors may set the first 5 components to be 1 instead of the first 20 components. 5. Practical NGD with damping (ridge regression). In practice, I do not think the pseudoinverse is used due to the high computational cost. It would be great if the authors could comment on cases when damping is used. Does damping introduce an extra inductive bias into NGD? 6. Initialization does play a role. NGD is a discretization of NGF. Solving NGF is an initial value problem (IVP) in this setting. Thus, initialization should play a role, such as pre-training. The statement about initialization should be more carefully stated. References [1] Kunstner, Frederik, Philipp Hennig, and Lukas Balles. "Limitations of the empirical Fisher approximation for natural gradient descent." Advances in Neural Information Processing Systems 32 (2019): 4156-4167.
6. Initialization does play a role. NGD is a discretization of NGF. Solving NGF is an initial value problem (IVP) in this setting. Thus, initialization should play a role, such as pre-training. The statement about initialization should be more carefully stated. References [1] Kunstner, Frederik, Philipp Hennig, and Lukas Balles. "Limitations of the empirical Fisher approximation for natural gradient descent." Advances in Neural Information Processing Systems 32 (2019): 4156-4167.
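To ground points 4 and 5 above, a small sketch of the empirical Fisher built from per-example gradients and a damped natural-gradient step. This is the generic construction (with toy logistic-regression gradients), not the paper's code, and whether such per-example gradients should use model samples or training labels is exactly the distinction [1] warns about.

import numpy as np

def empirical_fisher(per_example_grads):
    # per_example_grads: (N, P) per-example gradients of the log-likelihood.
    # The empirical FIM averages outer products over the N training points instead of
    # taking the expectation under the model distribution.
    G = np.asarray(per_example_grads)
    return G.T @ G / G.shape[0]                      # (P, P)

def damped_natural_gradient_step(theta, grad, fisher, lr=0.1, damping=1e-3):
    # (F + damping * I)^{-1} grad: the usual practical alternative to a pseudoinverse.
    P = fisher.shape[0]
    step = np.linalg.solve(fisher + damping * np.eye(P), grad)
    return theta - lr * step

# Toy logistic-regression gradients: d/dtheta log p(y | x) = (y - sigmoid(x @ theta)) * x
rng = np.random.default_rng(0)
X = rng.normal(size=(2500, 50))                      # N >> D, the regime the review asks about
theta = rng.normal(size=50)
y = (X @ theta + rng.normal(size=2500) > 0).astype(float)
p = 1.0 / (1.0 + np.exp(-(X @ theta)))
per_example_grads = (y - p)[:, None] * X             # (N, P)

F = empirical_fisher(per_example_grads)
theta = damped_natural_gradient_step(theta, -per_example_grads.mean(axis=0), F)

The damping term is also where point 5's question about an extra inductive bias enters: as damping grows, the update interpolates toward plain gradient descent.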
ARR_2022_295_review
ARR_2022
- The paper would be easier to follow with English proofreading, even though the overall idea is still understandable. - The newly proposed dataset, DRRI, could have been explored more in the paper. - It is not clear how named entities were extracted from the datasets. English proofreading would significantly improve the readability of the paper.
- It is not clear how named entities were extracted from the datasets. English proofreading would significantly improve the readability of the paper.
NIPS_2016_386
NIPS_2016
, however. First of all, there is a lot of sloppy writing, typos and undefined notation. See the long list of minor comments below. A larger concern is that some parts of the proof I could not understand, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which I mark with ** in the minor comments below. Hopefully I am missing something silly. One also has to wonder about the practicality of such algorithms. The main algorithm relies on an estimate of the payoff for the optimal policy, which can be learnt with sufficient precision in a "short" initialisation period. Some synthetic experiments might shed some light on how long the horizon needs to be before any real learning occurs. A final note. The paper is over length. Up to the two pages of references it is 10 pages, but only 9 are allowed. The appendix should have been submitted as supplementary material and the reference list cut down. Despite the weaknesses I am quite positive about this paper, although it could certainly use quite a lot of polishing. I will raise my score once the ** points are addressed in the rebuttal. Minor comments: * L75. Maybe say that pi is a function from R^m \to \Delta^{K+1} * In (2) you have X pi(X), but the dimensions do not match because you dropped the no-op action. Why not just assume the 1st column of X_t is always 0? * L177: "(OCO )" -> "(OCO)" and similar things elsewhere * L176: You might want to mention that the learner observes the whole concave function (full information setting) * L223: I would prefer to see a constant here. What does the O(.) really mean here? * L240 and L428: "is sufficient" for what? I guess you want to write that the sum of the "optimistic" hoped-for rewards is close to the expected actual rewards. * L384: Could mention that you mean |Y_t - Y_{t-1}| \leq c_t almost surely. ** L431: \mu_t should be \tilde \mu_t, yes? * The algorithm only stops /after/ it has exhausted its budget. Don't you need to stop just before? (the regret is only trivially affected, so this isn't too important). * L213: \tilde \mu is undefined. I guess you mean \tilde \mu_t, but that is also not defined except in Corollary 1, where it is just given as some point in the confidence ellipsoid in round t. The result holds for all points in the ellipsoid uniformly with time, so maybe just write that, or at least clarify somehow. ** L435: I do not see how this follows from Corollary 2 (I guess you meant part 1, please say so). So first of all mu_t(a_t) is not defined. Did you mean tilde mu_t(a_t)? But still I don't understand. pi^*(X_t) is the (possibly random) optimal static strategy while \tilde \mu_t(a_t) is the optimistic mu for action a_t, which may not be optimistic for pi^*(X_t)? I have similar concerns about the claim on the use of budget as well. * L434: The \hat v^*_t seems like strange notation. Elsewhere the \hat is used for empirical estimates (as is standard), but here it refers to something else. * L178: Why not say what Omega is here. Also, OMD is a whole family of algorithms. It might be nice to be more explicit. What link function? Which theorem in [32] are you referring to for this regret guarantee? * L200: "for every arm a" implies there is a single optimistic parameter, but of course it depends on a ** L303: Why not choose T_0 = m Sqrt(T)? Then the condition becomes B > Sqrt(m) T^(3/4), which improves slightly on what you give.
* It would be nice to have more interpretation of theta (I hope I got it right), since this is the most novel component of the proof/algorithm.
* L200: "for every arm a" implies there is a single optimistic parameter, but of course it depends on a ** L303: Why not choose T_0 = m Sqrt(T)? Then the condition becomes B > Sqrt(m) T^(3/4), which improves slightly on what you give.
ARR_2022_201_review
ARR_2022
I’m not convinced that AFiRe (the adversarial regularization) brings significant improvement, especially because - BLEU improvements are small (e.g., 27.93->28.64; would humans be able to identify the differences?) - Hyperparameter details are missing. - Human evaluation protocols, payment, etc. are all missing. Who are the raters? How are they "educated" and how do the authors ensure the raters provide good-faith annotations? What is the agreement? Other baselines are not compared against. For example, what if we just treat the explanation as a latent variable as in Zhou et al. (2021)? https://arxiv.org/pdf/2011.05268.pdf A few other points that are not fatal: - Gold-standard human explanation datasets are necessary, given the objective in line 307. - Does it mean that inference gets slowed down drastically, and there’s no way to only do inference (i.e., predict the label)? I don’t think this is fatal though. What’s the coefficient of the p(L, E | X) term in line 307? Why is it 1? Hyperparameter details are missing, so it’s not clear whether baselines are well-tuned, and whether ablation studies provide confident results. The writing is not careful, and often impedes understanding. - Line 229: What’s t? - Line 230: What’s n? - Line 273: having X in the equation without defining it is a bit weird; should there be an expectation over X? - Sometimes, the X is not bolded and not italicized (line 262). Sometimes, the X is not bolded but italicized (line 273). Sometimes, the X is bolded but not italicized (line 156). - Line 296: L and E should be defined in the immediate vicinity. Again, sometimes L, E are italicized (line 296) and sometimes not (line 302). - Line 187: It’s best to treat Emb as a function. Having l’ and e’ as superscripts is confusing. - In Table 4, why is there sometimes punctuation and sometimes none? - Perplexity does not necessarily measure fluency. For example, an overly small perplexity may correspond to repeating common n-grams. But it’s okay to use it as a coarse approximation of fluency. - Line 191: \cdot should be used instead of a regular dot. Section 2.1: It would be best to define the dimensionalities of everything. - Line 182: A bit confusing what the superscript p means. - Line 229: What’s t? - Line 230: What’s n? - Line 255: Comma should not start the line.
- Line 296: L and E should be defined in the immediate vicinity. Again, sometimes L, E are italicized (line 296) and sometimes not (line 302).
ICLR_2022_3089
ICLR_2022
1. it is usually difficult to get the rules in real-world applications. Statistical rules learnt from data may be feasible. 2. the experimental section is a little weak. More experiments are required.
2. the experimental section is a little weak. More experiments are required.
rXNGpyxsLQ
ICLR_2025
1. The manuscript does not discuss any specific kernel implementations or hardware-level optimizations for SP-LoRA, which could limit its practical efficiency on specialized hardware. 2. The evaluations appear to be primarily conducted on smaller to medium-sized LLMs. It's unclear how SP-LoRA would scale to very large language models with hundreds of billions of parameters. 3. The method is mainly tested on models pruned by Wanda or SparseGPT, which may not cover all possible sparsity patterns. It's uncertain how SP-LoRA would perform with other sparsity types or pruning methods. 4. The manuscript could benefit from more extensive comparisons with a wider range of models and other parameter-efficient fine-tuning techniques beyond LoRA and SPP. 5. While the manuscript mentions evaluations on domain-specific tasks like math and code, the results presented for these tasks may not be comprehensive enough to fully demonstrate SP-LoRA's capabilities across various domains. 6. Although memory usage is optimized, the method may still introduce some computational overhead compared to standard LoRA, which could impact training time.
4. The manuscript could benefit from more extensive comparisons with a wider range of models and other parameter-efficient fine-tuning techniques beyond LoRA and SPP.
NIPS_2019_1089
NIPS_2019
- The paper can be seen as incremental improvements on previous work that has used simple tensor products to represent multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - The paper performs good empirical analysis. They have been thorough in comparing with some of the existing state-of-the-art models for multimodal fusion including those from 2018 and 2019. Their model shows consistent improvements across 2 multimodal datasets. - The authors provide a nice study of the effect of polynomial tensor order on prediction performance and show that accuracy increases up to a point. Weaknesses: - There are a few baselines that could also be worth comparing to, such as “Strong and Simple Baselines for Multimodal Utterance Embeddings, NAACL 2019” - Since the model has connections to convolutional arithmetic units, ConvACs can also be a baseline for comparison. Given that you mention that “resulting in a correspondence of our HPFN to an even deeper ConAC”, it would be interesting to see a comparison table of depth with respect to performance. What depth is needed to learn “flexible and higher-order local and global intercorrelations”? - With respect to Figure 5, why do you think accuracy starts to drop after a certain order of around 4-5? Is it due to overfitting? - Do you think it is possible to dynamically determine the optimal order for fusion? It seems that the order corresponding to the best performance is different for different datasets and metrics, without a clear pattern or explanation. - The model does seem to perform well, but there seem to be many more parameters in the model, especially as the model consists of more layers. Could you comment on these tradeoffs, including time and space complexity? - What are the impacts on the model when multimodal data is imperfect, such as when certain modalities are missing? Since the model builds higher-order interactions, does missing data at the input level lead to compounding effects that further affect the polynomial tensors being constructed, or is the model able to leverage additional modalities to help infer the missing ones? - How can the model be modified to remain useful when there are noisy or missing modalities? - Some more qualitative evaluation would be nice. Where does the improvement in performance come from? What exactly does the model pick up on? Are informative features compounded and highlighted across modalities? Are features being emphasized within a modality (i.e. better unimodal representations), or are better features being learned across modalities? ****************************Clarity**************************** Strengths: - The paper is well written with very informative Figures, especially Figures 1 and 2. - The paper gives a good introduction to tensors for those who are unfamiliar with the literature. Weaknesses: - The concept of local interactions is not as clear as the rest of the paper. Is it local in that it refers to the interactions within a time window, or is it local in that it is within the same modality? - It is unclear whether the improved results in Table 1 with respect to existing methods are due to higher-order interactions or due to more parameters. A column indicating the number of parameters for each model would be useful. - More experimental details, such as the neural networks and hyperparameters used, should be included in the appendix.
- Results should be averaged over multiple runs to determine statistical significance. - There are a few typos and stylistic issues: 1. line 2: "Despite of being compact” -> “Despite being compact” 2. line 56: “We refer multiway arrays” -> “We refer to multiway arrays” 3. line 158: “HPFN to a even deeper ConAC” -> “HPFN to an even deeper ConAC” 4. line 265: "Effect of the modelling mixed temporal-modality features." -> I'm not sure what this means, it's not grammatically correct. 5. equations (4) and (5) should use \left( and \right) for parentheses. 6. and so on… ****************************Significance**************************** Strengths: - This paper will likely be a nice addition to the current models we have for processing multimodal data, especially since the results are quite promising. Weaknesses: - Not really a weakness, but there is a paper at ACL 2019 on "Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization” which uses low-rank tensor representations as a method to regularize against noisy or imperfect multimodal time-series data. Could your method be combined with their regularization methods to ensure more robust multimodal predictions in the presence of noisy or imperfect multimodal data? - The paper in its current form presents a specific model for learning multimodal representations. To make it more significant, the polynomial pooling layer could be added to existing models, with experiments showing consistent improvement across different model architectures. To be more concrete, the yellow, red, and green multimodal data in Figure 2a) can be raw time-series inputs, or they can be the outputs of recurrent units, transformer units, etc. Demonstrating that this layer can improve performance on top of different layers would make this work more significant for the research community. ****************************Post Rebuttal**************************** I appreciate the effort the authors have put into the rebuttal. Since I already liked the paper and the results are quite good, I am maintaining my score. I am not willing to give a higher score since the tasks are rather straightforward with well-studied baselines and tensor methods have already been used to some extent in multimodal learning, so this method is an improvement on top of existing ones.
1. line 2: "Despite of being compact” -> “Despite being compact” 2. line 56: “We refer multiway arrays” -> “We refer to multiway arrays” 3. line 158: “HPFN to a even deeper ConAC” -> “HPFN to an even deeper ConAC” 4. line 265: "Effect of the modelling mixed temporal-modality features." -> I'm not sure what this means, it's not grammatically correct.
ICLR_2021_2678
ICLR_2021
Weakness – W0 – There is no comparison to existing metrics, e.g., IS and FID, to clearly show the advantages the authors claimed in the paper, e.g., fewer samples and low variance. I think people now are using the combination of IS and FID in their experiments to measure quality and diversity. How are these two new metrics better than the existing combination? There is a bit of a disconnect between the concepts of the three proposed metrics, and I think it's still fair to compare the proposed metrics (visual quality and mode diversity) with the combination of IS and FID. W1 – The visual quality and mode diversity metrics use face detection/verification frameworks, which are quite specific to face datasets that have ground-truth labels. Can they be applied to domains other than faces? If so, does it cause any bias? W3 – What is X^{-1} in Eq. 6. Is it the inversion of the matrix determinant or the division by the number of samples? W4 – Mistakes in Eqs. 3 and 10 and Table 3: it should be the gradient ∇_x̂ instead of ∇_η. W5 – It looks like the paper was a bit rushed for this submission; several implementation details, e.g., hyper-parameters, batch size, and architecture details, are not provided. These factors may also affect the training of the GAN. It would be interesting to have studies on these in future work. W6 – The paper's experiments are limited to one low-resolution dataset with standard GAN architectures. It's important to evaluate various GAN architectures and datasets for GAN assessment papers. It would be interesting to conduct the experiments also on state-of-the-art GAN models, e.g., SN-GAN, BigGAN, ProGAN, and StyleGAN, and with high-resolution datasets as well. W7 – Some metrics are interesting and likely to be valuable in the future, but the experiments and comparisons could be more polished. It would be great if the current measures could be extended to other datasets, e.g., CIFAR, ImageNet. W8 – The paper has derived some mathematics of turning points, but what is the meaning of this, and how is deriving this equation useful?
6. Is it the inversion of the matrix determinant or the division by the number of samples? W4 – Mistakes in Eqs.
NIPS_2016_69
NIPS_2016
- The paper is somewhat incremental. The developed model is a fairly straightforward extension of the GAN for static images. - The generated videos have significant artifacts. Only some of the beach videos are kind of convincing. The action recognition performance is much below the current state-of-the-art on the UCF dataset, which uses more complex (deeper, also processing optic flow) architectures. Questions: - What is the size of the beach/golf course/train station/hospital datasets? - How do the video generation results from the network trained on 5000 hours of video look? Summary: While somewhat incremental, the paper seems to have enough novelty for a poster. The visual results are encouraging but with many artifacts. The action classification results demonstrate benefits of the learnt representation compared with random weights but are significantly below state-of-the-art results on the considered dataset.
- The paper is somewhat incremental. The developed model is a fairly straightforward extension of the GAN for static images.
NIPS_2021_589
NIPS_2021
This paper does not discuss its limitations. Here are some of my questions and suggestions: 1. Does the proposed method perform better on pure combinational logic (without registers)? It seems it may be much easier to model without state-related registers; it would be interesting to see a comparison between sequential designs and combinational designs. 2. How does it scale? What would be the sweet spot in terms of design complexity to train it on? 3. Is it possible to extend this to analog models, or probably at least to SystemVerilog? 4. For related work, it would be great to separate it into non-ML-based and ML-based approaches, and have a section stating the novelty of the proposed method.
1. Does the proposed method perform better on pure combinational logic (without registers)? It seems it may be much easier to model without state-related registers; it would be interesting to see a comparison between sequential designs and combinational designs.
NIPS_2020_1159
NIPS_2020
1. In the experiment section, the authors have designed a baseline that combines LDA and LSTM, namely LDA+LSTM. According to my understanding, this baseline can both capture the sequential information in text and provide the topic assignment for each word. I am curious to know the performance of this baseline in terms of the topic switch percent metric. 2. As a combination of RNNs and topic models, VRTM should also be evaluated on its sentence generation capacity, with comparisons against related models such as RNNs, the seq2seq model [1], and the Transformer [2], in terms of BLEU or ROUGE metrics. It is not sufficient to demonstrate the sentence generation capacity only by showing generated sentences. 3. In my opinion, it is necessary for the authors to analyze the impact of the RNN-related component on the quality of the learned topics and the effect of the topic-model-related component on the quality of the generated sentences, through comparisons, ablation experiments, or case studies, which would be valuable and provide more guidance for other research in related fields. [1] Sutskever, I., Vinyals, O., & Le, Q.V. (2014). Sequence to Sequence Learning with Neural Networks. ArXiv, abs/1409.3215. [2] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., & Polosukhin, I. (2017). Attention is All you Need. ArXiv, abs/1706.03762.
1. In the experiment section, the authors have designed a baseline that combines LDA and LSTM, namely LDA+LSTM. According to my understanding, this baseline can both capture the sequential information in text and provide the topic assignment for each word. I am curious to know the performance of this baseline in terms of the topic switch percent metric.
cfuZKjGDW7
ICLR_2025
1. The contributions of this work seem small. The TAO dataset already exists, and the contribution to the benchmark is the large-scale annotations. The designed expander is also a simple regressor, and the data augmentation schemes are based on existing ones. 2. The motivation of this task is unclear to me. When an object is totally occluded, its state, including position, size and motion, is very difficult to predict. Although the authors spend much time annotating such objects, the quality cannot be guaranteed because we do not know their real states. What are the potential downstream applications or benefits of amodal tracking that motivate this work? How might uncertainty in amodal predictions be handled or utilized in subsequent tasks?
2. The motivation of this task is unclear to me. When an object is totally occluded, its state, including position, size and motion, is very difficult to predict. Although the authors spend much time annotating such objects, the quality cannot be guaranteed because we do not know their real states. What are the potential downstream applications or benefits of amodal tracking that motivate this work? How might uncertainty in amodal predictions be handled or utilized in subsequent tasks?
7D4TPisEBk
EMNLP_2023
1. The paper lacks experiments on the Spider test set. Without experiments on this dataset, it is difficult to evaluate the generalizability of your proposed approach. 2. While I understand that GPT-4 is expensive, I suggest that you should include experiments with GPT-3.5, which is a more affordable option. This would provide a more comprehensive evaluation of your proposed approach. 3. The examples provided in the appendix are not detailed enough, and it is difficult to find the corresponding prompts for the different methods in Figure 3. I suggest that you provide more detailed examples to help readers better understand your approach. 4. While your proposed approach provides some evidence for in-context learning, I did not find any surprising or novel conclusions in your paper. I suggest that you further develop your approach to provide more significant contributions to the field.
2. While I understand that GPT-4 is expensive, I suggest that you should include experiments with GPT-3.5, which is a more affordable option. This would provide a more comprehensive evaluation of your proposed approach.
ACL_2017_31_review
ACL_2017
] See below for details of the following weaknesses: - Novelties of the paper are relatively unclear. - No detailed error analysis is provided. - A feature comparison with prior work is shallow, missing two relevant papers. - The paper has several obscure descriptions, including typos. [General Discussion:] The paper would be more impactful if it states novelties more explicitly. Is the paper presenting the first neural network based approach for event factuality identification? If this is the case, please state that. The paper would crystallize remaining challenges in event factuality identification and facilitate future research better if it provides detailed error analysis regarding the results of Table 3 and 4. What are dominant sources of errors made by the best system BiLSTM+CNN(Att)? What impacts do errors in basic factor extraction (Table 3) have on the overall performance of factuality identification (Table 4)? The analysis presented in Section 5.4 is more like a feature ablation study to show how useful some additional features are. The paper would be stronger if it compares with prior work in terms of features. Does the paper use any new features which have not been explored before? In other words, it is unclear whether main advantages of the proposed system come purely from deep learning, or from a combination of neural networks and some new unexplored features. As for feature comparison, the paper is missing two relevant papers: - Kenton Lee, Yoav Artzi, Yejin Choi and Luke Zettlemoyer. 2015 Event Detection and Factuality Assessment with Non-Expert Supervision. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1643-1648. - Sandeep Soni, Tanushree Mitra, Eric Gilbert and Jacob Eisenstein. 2014. Modeling Factuality Judgments in Social Media Text. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 415-420. The paper would be more understandable if more examples are given to illustrate the underspecified modality (U) and the underspecified polarity (u). There are two reasons for that. First, the definition of 'underspecified' is relatively unintuitive as compared to other classes such as 'probable' or 'positive'. Second, the examples would be more helpful to understand the difficulties of Uu detection reported in line 690-697. Among the seven examples (S1-S7), only S7 corresponds to Uu, and its explanation is quite limited to illustrate the difficulties. A minor comment is that the paper has several obscure descriptions, including typos, as shown below: - The explanations for features in Section 3.2 are somewhat intertwined and thus confusing. The section would be more coherently organized with more separate paragraphs dedicated to each of lexical features and sentence-level features, by: - (1) stating that the SIP feature comprises two features (i.e., lexical-level and sentence-level) and introduce their corresponding variables (l and c) *at the beginning*; - (2) moving the description of embeddings of the lexical feature in line 280-283 to the first paragraph; and - (3) presenting the last paragraph about relevant source identification in a separate subsection because it is not about SIP detection. - The title of Section 3 ('Baseline') is misleading. A more understandable title would be 'Basic Factor Extraction' or 'Basic Feature Extraction', because the section is about how to extract basic factors (features), not about a baseline end-to-end system for event factuality identification. 
- The presented neural network architectures would be more convincing if it describes how beneficial the attention mechanism is to the task. - Table 2 seems to show factuality statistics only for all sources. The table would be more informative along with Table 4 if it also shows factuality statistics for 'Author' and 'Embed'. - Table 4 would be more effective if the highest system performance with respect to each combination of the source and the factuality value is shown in boldface. - Section 4.1 says, "Aux_Words can describe the *syntactic* structures of sentences," whereas section 5.4 says, "they (auxiliary words) can reflect the *pragmatic* structures of sentences." These two claims do not consort with each other well, and neither of them seems adequate to summarize how useful the dependency relations 'aux' and 'mark' are for the task. - S7 seems to be another example to support the effectiveness of auxiliary words, but the explanation for S7 is thin, as compared to the one for S6. What is the auxiliary word for 'ensure' in S7? - Line 162: 'event go in S1' should be 'event go in S2'. - Line 315: 'in details' should be 'in detail'. - Line 719: 'in Section 4' should be 'in Section 4.1' to make it more specific. - Line 771: 'recent researches' should be 'recent research' or 'recent studies'. 'Research' is an uncountable noun. - Line 903: 'Factbank' should be 'FactBank'.
2015 Event Detection and Factuality Assessment with Non-Expert Supervision. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1643-1648.
ARR_2022_276_review
ARR_2022
- The evaluation of the paper could be made stronger by using some of the standard datasets for terminology translation (e.g. wmt21 shared task) and evaluation metrics (Alam et al. 2021). - The description of the alignment embedding seems a bit under-specified. Am I understanding it correctly that the constraints in the target sentence get an additional index that gets embedded (similar in concept to positional embeddings)? Are unaligned words in the constraints marked in a special manner? Are these embeddings recomputed in every refinement step of the LevT? - Line 232: There are surely sentences where not every bucket is represented, right? Would it then be more correct to say that you have **approximately** 6x the data? Or am I misunderstanding something? - Line 248: This explanation, while plausible, could be relatively easy to check just by looking at the words themselves. Have you done that? Have you tried filtering those words out (e.g. using stopword lists or similar) as they are unlikely to appear as constraints in real-life situations? - Line 263: It would be better to use l_i. - Subsubsection starting at 324: Would it make sense to use the lexicon information directly if available (which it is for some test conditions) and resort to automatic tools only if necessary? Related to this, have you measured how sensitive the method is to alignment errors? This is also especially relevant for out-of-domain settings, where the alignment model is also operating in out-of-domain conditions. - Table 4: Please also include bold numbers for the baselines of previous work. Specifically for WMT17-WIKT the best result in terms of BLEU is actually in the baselines.
- Table 4: Please also include bold numbers for the baselines of previous work. Specifically for WMT17-WIKT the best result in terms of BLEU is actually in the baselines.
ARR_2022_143_review
ARR_2022
Weak: 1. More examples are preferred to understand the motivations, the novel part of the proposed method and the baselines (see “detailed questions and comments”); 2. Some higher-level comparisons, such as between parametric and non-parametric solutions, are preferred. Currently, most baselines are in the same technical line as kNN-MT, which is too narrow to reflect the strength of the proposed algorithms/networks. Detailed questions and comments: 1. Table 1: what is the hardware used? Model sizes? For the “speed comparison”. 2. Figure 1: what are the labels of the horizontal and vertical axes? 3. Lines 088 to 089: it is hard to understand why it is “intuitively”, since Figure 1 is a 2D depiction of high-dimensional features/distributions; do you have any detailed data/experiments to support this “intuitively”? 4. Can you give real-world examples and attach them to your Figure 2? 5. Figure 3: can you give real example tokens instead of “token A”, “token B”? It is a bit difficult to understand what the “negative, positive, pivot” arrows in this figure are. 6. Lines 170 to 171, “unreliable neighbors” any examples of “unreliable neighbors”? 7. Line 458: is “0.18 BLEU” a significant improvement? I do not understand whether it is an “impressive result” or not. 8. Table 6 is a bit difficult to understand. Can you first give references of using SP, LTP, HTP, and RP? Also, why are only a limited number of the BLEU scores achieved by your “Ours” method higher than the others? Can you also give a speed/decoding comparison? Based on this table, I am not sure why we should rank your method higher than the other baselines. There is a big drop from 46.94 to 46.03 from “CKMT*” to “CKMT*+Ours”; any detailed analysis of this, or any future work plan in this direction? 9. Table 11: why does “adaptive kNN-MT” output so many “wrong translations”? How are these examples selected? 10. Section 2 “related work and background” is hard to understand. Intuitively, can you simply give a simple example of the difficulties of cross-domain translation (such as vocabulary difference, grammar difference and technical terms) and show that cluster-based methods are helpful for this cross-domain translation? In addition, besides cluster-based methods, can you also briefly summarize the major directions for dealing with “domain adaptation for NMT”? If there were a comparison among the major directions (not only other cluster-based methods), this paper would be ranked even higher (e.g., non-parametric vs. parametric solutions for “domain adaptation of MT”).
6. Lines 170 to 171, “unreliable neighbors” any examples of “unreliable neighbors”?
TFKIfhvdmZ
ICLR_2024
- The paper (still) needs improvements in the presentation to make it more interesting to a large machine learning audience. - The claim about the synergies between DQD and PPO looks insufficiently backed up. In particular, the main paper does not even mention the TD3GA algorithm, while the study of combining DQD with TD3 is crucial to understanding these synergies. More generally, your central claim is that using on-policy RL better fits the DQD framework, so the comparison to TD3GA should be central.
- The claim about the synergies between DQD and PPO looks insufficiently backed up. In particular, the main paper does not even mention the TD3GA algorithm, while the study of combining DQD with TD3 is crucial to understanding these synergies. More generally, your central claim is that using on-policy RL better fits the DQD framework, so the comparison to TD3GA should be central.
ACL_2017_288_review
ACL_2017
*I see few weaknesses in this paper. The only true one is the absence of a definition of style, which is a key concept in the paper - General Discussion: This paper describes two experiments that explore the relationship between writing task and writing style. In particular, controlling for vocabulary and topic, the authors show that features used in authorship attribution/style analysis can go a long way towards distinguishing between 1) a natural ending of a story 2) an ending added by a different author and 3) a purposefully incoherent ending added by a different author. This is a great and fun paper to read and it definitely merits being accepted. The paper is lucidly written and clearly explains what was done and why. The authors use well-known simple features and a simple classifier to prove a non-obvious hypothesis. Intuitively, it is obvious that a writing task greatly constraints style. However, proven in such a clear manner, in such a controlled setting, the findings are impressive. I particularly like Section 8 and the discussion about the implications on design of NLP tasks. I think this will be an influential and very well cited paper. Great work. The paper is a very good one as is. One minor suggestion I have is defining what the authors mean by “style” early on. The authors seem to mean “a set of low-level easily computable lexical and syntactic features”. As is, the usage is somewhat misleading for anyone outside of computational stylometrics. The set of chosen stylistic features makes sense. However, were there no other options? Were other features tried and they did not work? I think a short discussion of the choice of features would be informative.
*I see few weaknesses in this paper. The only true one is the absence of a definition of style, which is a key concept in the paper -
NIPS_2019_1102
NIPS_2019
1. It appears in Sections 6.1 and 6.2 that the tree-sliced Wasserstein distance outperforms the original optimal transport distance, which is surprising. Could you explain why this occurs? 2. The proof in the main text of Proposition 1 looks more like a proof sketch, particularly as the existence of a function f having the property you claim isn't immediately obvious. Could you include (in the supplement, at least) the full proof? --- UPDATE: I have read and I appreciate the authors' response. I will not be changing my score.
1. It appears in Sections 6.1 and 6.2 that the tree-sliced Wasserstein distance outperforms the original optimal transport distance, which is surprising. Could you explain why this occurs?
NIPS_2018_630
NIPS_2018
- While there is not much related work, I am wondering whether more experimental comparisons would be appropriate, e.g. with min-max networks, or Dugas et al., at least on some dataset where such models can express the desired constraints. - The technical delta from monotonic models (existing) to monotonic and convex/concave seems rather small, but sufficient and valuable, in my opinion. - The explanation of lattice models (S4) is fairly opaque for readers unfamiliar with such models. - The SCNN architecture is pretty much given as-is and is pretty terse; I would appreciate a bit more explanation, comparison to ICNN, and maybe a figure. It is not obvious for me to see that it leads to a convex and monotonic model, so it would be great if the paper would guide the reader a bit more there. Questions: - Lattice models expect the input to be scaled in [0, 1]. If this is done at training time using the min/max from the training set, then some test set samples might be clipped, right? Are the constraints affected in such situations? Does convexity hold? - I know the author's motivation (unlike ICNN) is not to learn easy-to-minimize functions; but would convex lattice models be easy to minimize? - Why is this paper categorized under Fairness/Accountability/Transparency, am I missing something? - The SCNN getting "lucky" on domain pricing is suspicious given your hyperparameter tuning. Are the chosen hyperparameters ever at the end of the searched range? The distance to the next best model is suspiciously large there. Presentation suggestions: - The introduction claims that "these shape constraints do not require tuning a free parameter". While technically true, the *choice* of employing a convex or concave constraint, and an increasing/decreasing constraint, can be seen as a hyperparameter that needs to be chosen or tuned. - "We have found it easier to be confident about applying ceterus paribus convexity;" -- the word "confident" threw me off a little here, as I was not sure if this is about model confidence or human interpretability. I suspect the latter, but some slight rephrasing would be great. - Unless I missed something, unconstrained neural nets are still often the best model on half of the tasks. After thinking about it, this is not surprising. It would be nice to guide the readers toward acknowledging this. - Notation: the x[d] notation is used in eqn 1 before being defined on line 133. - line 176: "corresponds" should be "corresponding" (or alternatively, replace "GAMs, with the" -> "GAMs; the") - line 216: "was not separately run" -> "it was not separately run" - line 217: "a human can summarize the machine learned as": not sure what this means, possibly "a human can summarize what the machine (has) learned as"? or "a human can summarize the machine-learned model as"? Consider rephrasing. - line 274, 279: write out "standard deviation" instead of "std dev" - line 281: write out "diminishing returns" - "Result Scoring" strikes me as a bit too vague for a section heading, it could be perceived to be about your experiment result. Is there a more specific name for this task, maybe "query relevance scoring" or something? === I have read your feedback. Thank you for addressing my observations; moving appendix D to the main seems like a good idea. I am not changing my score.
- "We have found it easier to be confident about applying ceterus paribus convexity;" -- the word "confident" threw me off a little here, as I was not sure if this is about model confidence or human interpretability. I suspect the latter, but some slight rephrasing would be great.
ICLR_2022_2663
ICLR_2022
of the paper and the reviewer’s questions. 1. Generating synthetic tabular data is non-trivial due to several factors. Following are two examples. 1) It is common that multiple entries in a table are associated with the same entity, resulting in additional relationships in the table. 2) There can be missing values in the table. For instance, a feature of an entity includes two fields, one of which is whether the entity has the feature, and the other is the value of this feature if the entity has it. However, this paper’s discussion assumes extremely simple data and does not consider difficult and common problems in generating tabular data. 2. This paper proposes to use known causal relationships between features. However, in practice, prior knowledge is not always available and might be inaccurate for a specific subpopulation. This is precisely why most researchers focus on mining causal relationships from the data automatically. The reviewer is concerned about the practicality of this work. 3. This work reminds the reviewer of the multi-label classification task, which is similar to the tabular data generation task. Both tasks aim to approximate the joint distribution of a set of features. Multi-label classification can be addressed by decomposing the feature set with the chain rule, which relies on a proper order in which to apply the chain rule [1]. The reviewer’s question is: is the order decided by causality the most appropriate order for training? For example, given an SCM A->B where A has a long-tail distribution while B has a uniform distribution, is modeling P(B|A) better than modeling P(A|B)? 4. Considering an SCM A1->A2->…->An (where n can be large), would there be an exposure bias problem in generating the corresponding sample with the proposed method? 5. Why does only the generator, but not the discriminator, use the causal structure? The reviewer thinks there should at least be some discussion of questions 2, 3, and 4. [1] Vinyals O, Bengio S, Kudlur M. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391. 2015 Nov 19.
2. This paper proposes to use known causal relationships between features. However, in practice, prior knowledge is not always available and might be inaccurate for a specific subpopulation. This is precisely why most researchers focus on mining causal relationships from the data automatically. The reviewer is concerned about the practicality of this work.
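For concreteness, the two factorization orders behind question 3 in the review above can be written out for the review's own A -> B example; these are standard chain-rule identities, not material from the paper under review:

```latex
\begin{align*}
P(A, B) &= P(A)\, P(B \mid A) && \text{(causal order, following the SCM } A \to B\text{)} \\
        &= P(B)\, P(A \mid B) && \text{(anti-causal order)}
\end{align*}
```

Both identities describe the same joint, so the reviewer's question is an empirical one: which of the two conditionals is easier for a generator to fit when A is long-tailed and B is uniform, as in the example above.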
1gqR7yEqnP
ICLR_2025
- strong overlap with non-cited work/lack of novelty: the authors centered their work around the observation that random labels offer a substantial performance improvement, which they claim is a novel finding ("We completely removed the structure from the learning process by randomizing the class labels, and found that the model actually was able to learn from data despite the complete randomization and even performed better from a generalization perspective."). In fact, this observation has been presented and discussed in several works in the past, including [1], which is not cited by the authors. - soundness of claims: the paper makes bold claims about "how neural networks learn" and what drives this process ("we present as provocative claim that the process of learning from data happens independently of human-imposed structures. To support this, we introduce the bold alternative hypothesis called the “Pan for Gold”. "). These claims remain conjectures and hypotheses, supported only by empirical evidence that the network learns from random labels, which does not prove the authors' "pan for gold" hypothesis. Additionally, the authors further justify the relevance of their work by relying on GradCam visualisation, a method proven to be unreliable -- as also mentioned by the authors. - confidence in empirical findings: while the paper is well-written and clear, there is a lack of polishing of figures and of empirical results which impedes clarity as well as confidence in empirical results (e.g., missing axis labels, randomly masked out portions of curves, single seed experiments, core findings in section one are conducted on two small scale datasets and a single architecture type). - missing sections: the authors omit important sections from their work, including a related work section and a discussion of the paper's limitations. [1] Bojanowski, Piotr, and Armand Joulin. "Unsupervised learning by predicting noise." International Conference on Machine Learning. PMLR, 2017.
- confidence in empirical findings: while the paper is well-written and clear, there is a lack of polishing of figures and of empirical results which impedes clarity as well as confidence in empirical results (e.g., missing axis labels, randomly masked out portions of curves, single seed experiments, core findings in section one are conducted on two small scale datasets and a single architecture type).
p7K3idvKTQ
ICLR_2025
1. The paper lacks specifics on the pre-training corpus for each domain. If models are pre-trained solely on training sets, this limits the utility of pre-training, affecting the reliability of conclusions. For instance, in the health domain, pre-training on a large corpus like PubMed abstracts is common to ensure domain knowledge. 2. Comparisons are primarily with pre-trained models from BAAI and OpenAI; adding more state-of-the-art domain-specific baselines could strengthen the evaluation. 3. While the work presents interesting findings, the novelty is limited. Observations like tighter CIs with fine-tuning are expected since task-specific fine-tuning generally increases confidence for a specific task while potentially reducing generalizability.
3. While the work presents interesting findings, the novelty is limited. Observations like tighter CIs with fine-tuning are expected since task-specific fine-tuning generally increases confidence for a specific task while potentially reducing generalizability.
ICLR_2022_2213
ICLR_2022
1. Based on some efforts to reproduce the results on my end, it is not clear how strongly the proposed observation holds, which might limit the significance of the contributions of the paper. Comments and questions: 1. What would constitute distributional generalization in the setting of regression? If I consider the setting of regression for a moment, the phenomenon appears to be less surprising: a reasonably smooth regression model which interpolates the train data would necessarily exhibit distributional generalization in that setting. 2. It would help to see some error bars on the plots in Figure 2B. I tried replicating the toy example (classify CIFAR-10 classes as objects vs animals with label noise on cats) and observed that when label noise is 30% on the train data, only 2-5% of cats in the test set were labeled as objects. The network used was a ResNet50 trained to train accuracy 96.2% using SGD with learning rate = 0.1, momentum = 0.9 and weight decay of 5e-4, and trained for ~160 epochs. However, I did observe that when the label noise is increased to 70%, the distributional generalization effect was seen more strongly (test cats labeled objects 60-80% of the time). 3. It would make for a stronger case if the paper reported the numbers observed when the label noise experiment is performed on ImageNet with 1000 classes as well (at least on the non-tail classes). This would further stress test the conjecture. Even if the phenomenon significantly weakens in this setting, the numbers are worth seeing. Minor comments: 1. AlexNet top-1 accuracy on ImageNet reported as 56.5%. Isn’t this 63.3%? 2. Another minor comment is on the name used to describe the phenomenon: distributional generalization sounds a bit strong to capture the empirical phenomenon presented. It represents the ideal of the total variation between the test and train distributions of the network’s outputs vanishing to zero, which might not be the case. It is hard to draw this conclusion from a few test functions on which the outputs match.
3. It would make for a stronger case if the paper reported the numbers observed when the label noise experiment is performed on ImageNet with 1000 classes as well (at least on the non-tail classes). This would further stress test the conjecture. Even if the phenomenon significantly weakens in this setting, the numbers are worth seeing.
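A minimal sketch of the replication described in point 2 of the review above: CIFAR-10 binarized into animals vs. objects, a fraction of cat labels flipped to “object”, and a ResNet50 trained with the stated SGD settings. Only the noise level, optimizer hyperparameters, and epoch count come from the review; the class-index mapping, data pipeline, and training loop are illustrative assumptions.

```python
import numpy as np
import torch
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

# CIFAR-10 indices: 0 plane, 1 car, 2 bird, 3 cat, 4 deer, 5 dog, 6 frog, 7 horse, 8 ship, 9 truck
ANIMALS, CAT = {2, 3, 4, 5, 6, 7}, 3

def binarize_with_cat_noise(targets, noise_frac, seed=0):
    """Map labels to animal (1) / object (0) and flip a fraction of the cats to 'object'."""
    targets = np.asarray(targets)
    binary = np.isin(targets, list(ANIMALS)).astype(np.int64)
    cats = np.where(targets == CAT)[0]
    flipped = np.random.default_rng(seed).choice(cats, size=int(noise_frac * len(cats)), replace=False)
    binary[flipped] = 0
    return binary.tolist()

tfm = T.Compose([T.RandomCrop(32, padding=4), T.RandomHorizontalFlip(), T.ToTensor()])
train = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=tfm)
test = torchvision.datasets.CIFAR10("data", train=False, download=True, transform=T.ToTensor())
cat_mask = np.asarray(test.targets) == CAT  # clean test cats, for the final measurement
train.targets = binarize_with_cat_noise(train.targets, noise_frac=0.3)  # 30% noise, as in the review
test.targets = binarize_with_cat_noise(test.targets, noise_frac=0.0)    # test labels stay clean

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet50(num_classes=2).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(160):  # the review trains for ~160 epochs to ~96% train accuracy
    model.train()
    for x, y in DataLoader(train, batch_size=128, shuffle=True, num_workers=2):
        opt.zero_grad()
        loss_fn(model(x.to(device)), y.to(device)).backward()
        opt.step()

# Fraction of clean test cats predicted as 'object' (class 0), the quantity the review reports.
model.eval()
preds = []
with torch.no_grad():
    for x, _ in DataLoader(test, batch_size=256):
        preds.append(model(x.to(device)).argmax(1).cpu())
preds = torch.cat(preds).numpy()
print("test cats predicted as object:", (preds[cat_mask] == 0).mean())
```

With no learning-rate schedule this is unlikely to reach the 96.2% train accuracy quoted in the review, so the measured fraction should be read as a qualitative check rather than an exact reproduction.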
ICLR_2021_973
ICLR_2021
. Clearly state your recommendation (accept or reject) with one or two key reasons for this choice. I recommend acceptance. The number of updates needed to learn realistic brain-like representations is a fair criticism of current models, and this paper demonstrates that this number can be greatly reduced, with moderate reduction in Brain-Score. I was surprised that it worked so well. Ask questions you would like answered by the authors to help you clarify your understanding of the paper and provide the additional evidence you need to be confident in your assessment. - Is the third method (updating only down-sampling layers) meant to be biologically relevant? If so, can anything more specific be said about this, other than that different cortical layers learn at different rates? - Given that the brain does everything in parallel, why is the number of weight updates a better metric than the number of network updates? Provide additional feedback with the aim to improve the paper. - Bottom of pg. 4: I think 37 bits / synapse (Zador, 2019) relates to specification of the target neuron rather than specification of the connection weight. So I’m not sure its obvious how this relates to the weight compression scheme. The target neurons are already fully specified in CORnet-S. - Pg. 5: “The training time reduction is less drastic than the parameter reduction because most gradients are still computed for early down-sampling layers (Discussion).” This seems not to have been revisited in the Discussion (which is fine, just delete “Discussion”). - Fig. 3: Did you experiment with just training the middle Conv layers (as opposed to upsample or downsample layers)? - Fig. 3: Why go to 0 trained parameters for downstream training, but minimum ~1M trained parameters for CT? - Fig. 4: On the color bar, presumably one of the labels should say “worse”. - Section B.1: How many Gaussian components were used, or how many parameters total? Or if different for each layer, what was the maximum across all layers? - Section B.3: I wasn’t clear on the numbers of parameters used in each approach. - D.1: How were CORnet-S clusters mapped to ResNet blocks? I thought different clusters were used in each layer. If not, maybe this could be highlighted in Section 4.
- Given that the brain does everything in parallel, why is the number of weight updates a better metric than the number of network updates?
NIPS_2017_143
NIPS_2017
For me the main issue with this paper is that the relevance of the *specific* problem that they study -- maximizing the "best response" payoff (l127) on test data -- remains unclear. I don't see a substantial motivation in terms of a link to settings (real or theoretical) that are relevant: - In which real scenarios is the objective given by the adversarial prediction accuracy they propose, in contrast to classical prediction accuracy? - In l32-45 they claim to give a real example, but for me this is too vague. I do see that in some scenarios the loss/objective they consider (high accuracy on the majority) kind of makes sense. But I imagine that such losses have already been studied, without necessarily referring to "strategic" settings. In particular, how is this related to robust statistics, Huber loss, precision, recall, etc.? - In l50 they claim that "perhaps even in most [...] practical scenarios" predicting accurately on the majority is most important. I disagree: in many areas with safety issues such as robotics and self-driving cars (generally: control), the models are allowed to have small errors, but by no means may have large errors (imagine a self-driving car significantly overestimating the distance to the next car in 1% of the situations). Related to this, in my view they fall short of what they claim as their contribution in the introduction and in l79-87: - Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game theoretic as the opponent doesn't behave strategically (i.e., take into account the other strategic player). - In particular, in the experiments, it doesn't come as a complete surprise that the opponent can be outperformed w.r.t. the multi-agent payoff proposed by the authors, because the opponent simply doesn't aim at maximizing it (e.g. in the experiments it maximizes classical SE and AE). - Related to this, in the experiments it would be interesting to see the comparison of the classical squared/absolute error on the test set as well (since this is what LSE claims to optimize). - I agree that "prediction is not done in isolation", but I don't see the "main" contribution of showing that the "task of prediction may have strategic aspects" yet. REMARKS: What's the "true" payoff in Table 1? I would have expected to see the test set payoff in that column. Or is it the population (complete sample) empirical payoff? Have you looked into the work by Vapnik about teaching a learner with side information? This looks a bit similar to having your discrepancy p alongside x, y.
- In which real scenarios is the objective given by the adversarial prediction accuracy they propose, in contrast to classical prediction accuracy?
N3a2vVk8vu
EMNLP_2023
1. The paper lacks specific details on how SUMMARIZER effectively eliminates irrelevant information. In particular, it does not clearly define what qualifies as irrelevant information that should be avoided when providing input to the LLM. Additionally, there is a lack of an error study on SUMMARIZER, which could help determine whether there is a possibility of crucial information being mistakenly removed by the prompt. 2. The paper's failure to provide comprehensive details about the baseline models for comparison, such as REACT, is a notable limitation. The differences between ASH and REACT are not highlighted, making it challenging to discern the most crucial components of ASH that contribute to its superiority over existing approaches. 3. The evaluation in the paper is not sufficiently comprehensive and lacks transparency regarding the experiment setup. For instance, there is no mention of the number of different sets of in-context examples used in the experiments. Additionally, the paper does not explore the effects of varying the number of in-context examples. Moreover, the evaluation relies solely on one dataset, which may limit the generalizability of the results.
3. The evaluation in the paper is not sufficiently comprehensive and lacks transparency regarding the experiment setup. For instance, there is no mention of the number of different sets of in-context examples used in the experiments. Additionally, the paper does not explore the effects of varying the number of in-context examples. Moreover, the evaluation relies solely on one dataset, which may limit the generalizability of the results.
NIPS_2020_487
NIPS_2020
- The contrastive learning framework is the same as SimCLR. - Graph augmentation methods, such as DropNode, DropEdge, and FeatureMask, have been adopted in previous GNN work, such as [1,2]. [1] DropEdge: Towards Deep Graph Convolutional Networks on Node Classification. [2] Strategies for Pre-training Graph Neural Networks.
- The contrastive learning framework is the same as SimCLR.
I8pdQLfR77
ICLR_2024
1. In Equation (5), AGeLU and AGeLU′ are introduced as two nonlinear functions. This prompts an intriguing question: what would the outcome be if the split were into more parts, say four? A more comprehensive ablation study should be conducted to provide a richer understanding of the behavior and performance of these functions. 2. In Section 4.3, there is a lack of comparative experiments with other non-linear blocks, such as the bottleneck in ResNet or the linear bottleneck in MobileNetV2, which could have showcased the unique advantages or potential shortcomings of the proposed method in a broader context. 3. The proposed IMLP module has only been evaluated on a few models, such as ViT and Swin, which were proposed several years ago. This raises the question of the module's effectiveness on more recent, higher-accuracy models like iFormer. The validation of the IMLP module across a broader spectrum of models could provide a clearer picture of its versatility and efficacy in current vision transformer landscapes.
2. In Section 4.3, there is a lack of comparative experiments with other non-linear blocks, such as the bottleneck in ResNet or the linear bottleneck in MobileNetV2, which could have showcased the unique advantages or potential shortcomings of the proposed method in a broader context.
ICLR_2023_73
ICLR_2023
At least in my opinion, Atari 100k is not particularly well benchmarked. I would say that the only "good" algorithm that has been benchmarked by experimenters incentivized to tune the algorithm to maximize performance is EfficientZero. Thus, the extent to which IRIS is good is a bit unclear to me. It would be interesting to see how IRIS performs on the standard Atari benchmark (or, inversely, how something like Dreamer compares to IRIS on Atari 100k). The statistical precipice paper suggests that results on Atari 100k are reliable (under appropriate metrics) with as few as 10 runs; the submission only uses 5 runs. My gut feeling is that the margin of improvement under various metrics seems substantial enough that it would probably continue to hold under a more reliable number of runs, but it would be good to actually show this. Addendum: I also concur with reviewer t9Kq: 1) the submission would benefit from additional attention to related work (such as [1],[2],[3]) and 2) additional ablations. Comments on decision-time planning: The submission argues: "Moreover, IRIS could be combined with MCTS, both in imagination and in the real environment. Therefore, methods involving lookahead search should not be seen as direct competitors but rather as potential extensions to learning-only methods." In principle, this is of course true. However, as a matter of practice, I am not sure it is as clear. I am not aware of any examples of a non-MuZero-like architecture successfully utilizing decision-time planning. It is plausible to me that algorithms like Dreamer and IRIS, which perform well in a background planning regime, may not enjoy much benefit from decision-time MCTS. Comments on superhuman performance: Most notably, human experts were surpassed by deep RL algorithms in a multitude of arcade (Mnih et al., 2015; Schrittwieser et al., 2020; Hafner et al., 2021), real-time strategy (Vinyals et al., 2019; Berner et al., 2019), board (Silver et al., 2016; 2018; Schrittwieser et al., 2020) and imperfect information (Schmid et al., 2021; Brown et al., 2020a) games. I find this sentence is misleading. Deep RL algorithms have not surpassed human experts in most Atari games, as is clearly evidenced by the human world record metric. They have also not surpassed human experts in StarCraft (AlphaStar is only grandmaster level) or DOTA (OpenAI5 played a restricted version of the game and was found to be reliably exploitable by humans).
1) the submission would benefit from additional attention to related work (such as [1],[2],[3]) and
NIPS_2019_387
NIPS_2019
- The main weakness is empirical---scratchGAN appreciably underperforms an MLE model in terms of LM score and reverse LM score. Further, samples from Table 7 are ungrammatical and incoherent, especially when compared to the (relatively) coherent MLE samples. - I find this statement in the supplemental section D.4 questionable: "Interestingly, we found that smaller architectures are necessary for LM compared to the GAN model, in order to avoid overfitting". This is not at all the case in my experience (e.g. Zaremba et al. 2014 train 1500-dimensional LSTMs on PTB!), which suggests that the baseline models are not properly regularized. D.4 mentions that dropout is applied to the embeddings. Is it also applied to the hidden states? - There is no comparison against existing text GANs, many of which have open source implementations. While SeqGAN is mentioned, they do not test it with the pretrained version. - Some natural ablation studies are missing: e.g. how does scratchGAN do if you *do* pretrain? This seems like a crucial baseline to have, especially since the central argument against pretraining is that MLE-pretraining ultimately results in models that are not too far from the original model. Minor comments and questions: - Note that since ScratchGAN still uses pretrained embeddings, it is not truly trained from "scratch". (Though Figure 3 makes it clear that pretrained embeddings have little impact.) - I think the authors risk overclaiming when they write "Existing language GANs... have shown little to no performance improvements over traditional language models", when it is clear that ScratchGAN underperforms a language model across various metrics (e.g. reverse LM).
- There is no comparison against existing text GANs, many of which have open source implementations. While SeqGAN is mentioned, they do not test it with the pretrained version.
NIPS_2020_621
NIPS_2020
1. I believe the paper should have also focused on the algorithmic aspects of the solution. Once the concept of Blackwell winner is proposed, the novelty of the paper seems limited. 2. Some more details on the user study would have made the empirical claims stronger. 3. Minor: One thing that is problematic in the approach is the selection of target sets. I hope something can be done about it. Nevertheless, it is good that the authors acknowledge this limitation.
1. I believe the paper should have also focused on the algorithmic aspects of the solution. Once the concept of Blackwell winner is proposed, the novelty of the paper seems limited.
ARR_2022_13_review
ARR_2022
- Some design choices could have been justified in more detail and explained with more examples - The formalization is hard to read at times, with multiple Greek letters and subscripts for somewhat easy-to-grasp concepts - Multilingual coverage could of course be better, but the current limitation is understandable and acceptable given the large amount of manual work involved - L100: Such > This - L182-188: an example would be welcome - §3.2.1: I would have liked to see a discussion on the properties of these examples. Are they shorter/longer than the sentences seen in the average MT training corpora? Are they similar in style/genre/domain? Are these invented examples or are they taken from e.g. news sources? How big is the risk of finding these sentences in the training corpora? - L232: why not 2? I don't think that lemmas with only 2 senses are too easy... - L265-268: I would move this paragraph to §3.3.2. The current order suggests that §3.3.1 is done manually as well. - L293: Do I understand correctly that as soon as a sentence fails in one of the 5 languages it is discarded? Doesn't this place too harsh of a restriction on sentence selection? It probably doesn't hurt if the number of sentences is not exactly identical for all language pairs, or does it? - L349-353: What is the connection of the Isabelle et al. paper with DeepL? How certain is it that the Johnson et al. paper reliably depicts Google's current production system? - Figure 2: I would find it more intuitive to report the MISS percentages in a table, analogously to Table 3. Do the MISS percentages correlate with the accuracy figures? Or does it happen that systems with low MISS percentages are penalized for it by low accuracy figures? - Figure 3: If the red bars are parts of the grey bars, you could stack them instead of having two separate bars. - Table 4 and 5 would be more readable if they were split into two tables each, to have one table per measure. E.g. first put the 8 SFII columns and then the 8 SPDI columns rather than alternating between them. - §4.5: I didn't understand this section. An example could help. - §4.7: I don't understand why and how you need manual annotation here. The original dataset contains source sentences and good+bad target words. So you should be able to retrieve a translation from your five systems, change the target word, and have a quick manual check for agreement etc. Why do you need to translate the entire sentence manually? - Table 7: no need for decimal places if you only have 100 examples - Do you have an idea/conjecture why DeepL is so much better? Is this purely a data curation issue, or could this be due to some architectural changes? It's of course quite frustrating to see that somebody has basically solved the problem but doesn't say how :) - References: Please make sure that abbreviations and language names are capitalized. - Appendix: What does "Back to Model-specific Analyses." mean? - Appendix: Figure 5 has a column "%mistakes", but the other figures have "Accuracy". Does this mean that the mean accuracy of DeepL is actually 40% instead of 60%? Please check. - Appendix, Table 2: What does the slash in Russian and Chinese mean? Is there no second lexicalization? Why not just leave that line empty?
- Table 4 and 5 would be more readable if they were split into two tables each, to have one table per measure. E.g. first put the 8 SFII columns and then the 8 SPDI columns rather than alternating between them.
NIPS_2019_424
NIPS_2019
weakness of the current watermarking methods, namely the fact that they are prone to ambiguity attacks, - offers an analysis of the issue investigating the requirements that have to be fulfilled by any method that should withstand such attacks, - proposes such a method based on "passport layers" which are appended after convolutions. Overall the paper is well structured and the method is explained with enough detail to probably allow reimplementation. The text is clear enough with the exception of the experiments section, which would require some additional attention from the authors. Details follow below. Concerning the method, I would be interested in seeing how much the performance (accuracy) suffers because of including the passports (no passports vs. the V1 setting) and because of the multi-task setting (V2/3 vs V1). In general a comparison of the three proposed settings V1, V2, V3 is missing from the experiments/discussion. Specific comments on the experiments follow: - It is not clear whether the experiments use V1, V2, or V3. - It is not entirely clear what Table 2 shows. I guess the numbers in parentheses are the accuracies either on the source task or after fine-tuning on the target task, and the numbers in front of the parentheses are the fraction of cases when the signature withstood the fine-tuning. Either the table headers or the legend should be improved. (Also please make the left and right tables symmetric in how the numbers are shown - with or without the "%" sign.) - In Fig. 4 legend, please specify the performance metric (accuracy?) instead of writing "DNN performances". - Consider reformulating sentences in the "Experiment results" section to make understanding the experiments easier, especially the paragraph on fine-tuning (L245-53), or sentences like "In this experiment..." (L255). Sometimes one has to search for the meaning, as in the sentence "This type of weight pruning..." (L256), where it is not clear which special kind of weight pruning (if any) is referred to. In subsection 4.2, it is not entirely clear what the "fake2" attack consists of; please clarify. - In Fig. 5, it would be helpful to specify how "valid" and "orig" differ. - The figures use too small a font, which makes them hard to read (especially Figs. 3 & 5). Please adapt the figures.
- In Fig. 5, it would be helpful to specify how "valid" and "orig" differ.
NIPS_2017_382
NIPS_2017
weakness that there is much tuning and other specifics of the implementation that need to be determined on a case by case basis. It could be improved by giving some discussion of guidelines, principles, or references to other work explaining how tuning can be done, and some acknowledgement that the meaning of fairness may change dramatically depending on that tuning. * Clarity The paper is well organized and explained. It could be improved by some acknowledgement that there are a number of other (competing, often contradictory) definitions of fairness, and that the two appearing as constraints in the present work can in fact be contradictory in such a way that the optimization problem may be infeasible for some values of the tuning parameters. * Originality The most closely related work of Zemel et al. (2013) is referenced, the present paper explains how it is different, and gives comparisons in simulations. It could be improved by making these comparisons more systematic with respect to the tuning of each method--i.e. compare the best performance of each. * Significance The broad problem addressed here is of the utmost importance. I believe the popularity of (IF) and modularity of using preprocessing to address fairness means the present paper is likely to be used or built upon.
* Originality The most closely related work of Zemel et al. (2013) is referenced, the present paper explains how it is different, and gives comparisons in simulations. It could be improved by making these comparisons more systematic with respect to the tuning of each method--i.e. compare the best performance of each.
bWXIut4pNM
EMNLP_2023
There are 3 potential changes that would improve this work: * First, something that didn't come across was the importance and intuition behind the choice of the similarity kernel. What types of kernels work best? Are there, e.g., cheap empirical metrics that can effectively estimate the clustering kernel in eq. 2? Could you estimate this similarity kernel through something very simple, such as SentenceBERT embeddings? What features is it capturing that makes it particularly good? * Including a comparison to one of the methods mentioned in the computer vision setting would have been more useful than comparing to, e.g. loss-based sampling. I understand that these are not always applicable and typically require a supervised set-up, but some of them can probably be adapted to language tasks relatively easily. * Not sure I understand why increasing the subset size in some cases actually hurts performance? e.g., Table 4. I think the paper could benefit by elaborating on this, and why some subsets of the dataset are actually harmful to model performance. -- Edit: I acknowledge the authors' comments below. Although I still have some questions/concerns about the subset ablations (and hope that the authors will eventually include a more granular analysis of why this happens, and if decoder only models exhibit some type of this behavior), I am moving to increase my soundness score as they have addressed my other comments.
* Including a comparison to one of the methods mentioned in the computer vision setting would have been more useful than comparing to, e.g. loss-based sampling. I understand that these are not always applicable and typically require a supervised set-up, but some of them can probably be adapted to language tasks relatively easily.
NIPS_2016_95
NIPS_2016
1. The time complexity of the learning algorithm should be explicitly estimated to prove the scalability properties. 2. In Figure 4, the time complexity for TRMF-AR({1,8}) and TRMF-AR({1,2,…,8}) seems to be the same. The reason should be explained.
1. The time complexity of the learning algorithm should be explicitly estimated to prove the scalability properties.
NIPS_2016_537
NIPS_2016
weakness of the paper is the lack of clarity in some of the presentation. Here are some examples of what I mean. 1) l 63 refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. 3) l 123, this is not the definition of "dominated". 4) for the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann which discusses the ability of universal kernels to separate an arbitrary finite data set with margin arbitrarily close to one. 5) an example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering. 6) in section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means. 7) in the same section, the notation {\cal P} with a subscript is used several times without being defined. 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers.
4) for the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann which discusses the ability of universal kernels to separate an arbitrary finite data set with margin arbitrarily close to one.
NIPS_2018_356
NIPS_2018
The paper doesn't have one message. Theorem 3 is not empirically investigated. TYPOS, ETC - Abstract. To state that the paper "draws useful connections" is uninformative, if the abstract doesn't state *what* connections are drawn. - Theorem 1. Is subscript k (overloaded later in Line 178, etc) necessary? It looks like one can simply restate the theorem in terms of alpha -> infinity? - Line 137 -- do the authors confuse VAEs with GANs' mode collapse here? - The discussion around equation (10) is very terse, and not very clearly explained. - Line 205. True posterior over which random variables? - Line 230 deserves an explanation, i.e. why the conditional p(x_missing | x_observed, x) is easily computable. - Figure 3: which Markov chain line is red and blue? Label?
- The discussion around equation (10) is very terse, and not very clearly explained.