paper_id
stringlengths
10
19
venue
stringclasses
14 values
focused_review
stringlengths
128
8.15k
point
stringlengths
47
624
NIPS_2018_514
NIPS_2018
and Questions: 1) Can the results be improved so that ProxSVRG+ becomes better than SCSG for all minibatch sizes b? 2) How does the PL condition you use compare with the PL conditions proposed in “Global Convergence of Arbitrary-Block Gradient Methods for Generalized Polyak-Łojasiewicz Functions”, arXiv:1709.03014 ? 3) Line 114: Is the assumption really necessary? Why? Or just sufficient? 4) I think the paper would benefit if some more experiments were included in the supplementary, on some other problems and with other datasets. Otherwise the robustness/generalization of the observations drawn from the included experiments is unclear. Small issues: 1) Lines 15-16: The sentence starting with “Besides” is not grammatically correct. 2) Line 68: What is a “super constant”? 3) Line 75: “matches” -> “match” 4) Page 3: “orcale” – “oracle” (twice) 5) Caption of Table 1: “are defined” -> “are given” 6) Eq (3): use full stop 7) Line 122: The sentence is not grammatically correct. 8) Line 200: “restated” -> “restarted” 9) Paper [21]: accents missing for one author’s name ***** I read the rebuttal and the other reviews, and am keeping my score.
2) How does the PL condition you use compare with the PL conditions proposed in “Global Convergence of Arbitrary-Block Gradient Methods for Generalized Polyak-Łojasiewicz Functions”, arXiv:1709.03014 ?
NIPS_2020_911
NIPS_2020
1. While the idea of jointly discovering, hallucinating, and adapting is interesting, there is a complete lack of discussing the impact of adding additional parameters and additional computational effort due to the multi-stage training and the multiple discriminators. The authors should provide this analysis for a fair comparison with the baseline [31, 33, *]. 2. Splitting the target data into easy and hard is already explored in the context of UDA. 3. Discovering the latent domain from the target domain is already proposed in [24]. 4. The problem of Open Compound Domain Adaptation is already presented in [**]. 5. Hallucinating the latent target domains is achieved through an image translation network adapted from [5]. 6. Style consistency loss to achieve diverse target styles has been used in previous works. 7. While the existing UDA methods [31,33] only use one discriminator, it is unclear to me why authors have applied multiple discriminators. 8. The details of the discriminator have not been discussed. 9. I was wondering why including the hallucination part reduces the performance in Table 1(b). It seems like the Discover module with [31] performs better than (Discover + Hallucinate + [31]). Also, the complex adapting stage where the authors used multiple discriminators mostly brings performance improvement. More importantly, did authors try to run the baseline models [17, 25, 31, 33, 39] with a similar longer training scheme? Otherwise, it is unfair to compare with the baselines. 10. Since the authors mentioned that splitting the training process helps to achieve better performance, It could be interesting to see the results of single-stage and multi-stage training. 11. It is not well explained why the adaptation performance drops when K > 3. Also, the procedure of finding the best K seems ad hoc and time-consuming. 12. I am just curious to see how the proposed method performs in a real domain adaptation scenario (GTA5->CityScapes). [*] Fei Pan, Inkyu Shin, François Rameau, Seokju Lee, In So Kweon. Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision. In CVPR 2020. [**] Liu, Ziwei and Miao, Zhongqi and Pan, Xingang and Zhan, Xiaohang and Lin, Dahua and Yu, Stella X. and Gong, Boqing. Open Compound Domain Adaptation. In CVPR 2020.
1. While the idea of jointly discovering, hallucinating, and adapting is interesting, there is a complete lack of discussing the impact of adding additional parameters and additional computational effort due to the multi-stage training and the multiple discriminators. The authors should provide this analysis for a fair comparison with the baseline [31, 33, *].
2RQokbn4B5
ICLR_2025
1. The analysis of the correlation between dataset size and the Frobenius norm and the singular values is underwhelming. It is not clear if this trend holds across different model architectures, and if so, no theoretical evidence is advanced for this correlation. 2. The proposed method for dataset size recovery is way too simple to offer any insights. 3. The authors only study dataset size recovery for foundation models fine-tuned with a few samples. However, this problem is very general and should be explored in a broader framework.
1. The analysis of the correlation between dataset size and the Frobenius norm and the singular values is underwhelming. It is not clear if this trend holds across different model architectures, and if so, no theoretical evidence is advanced for this correlation.
NIPS_2021_2257
NIPS_2021
- Missing supervised baselines. Since most experiments are done on datasets of scale ~100k images, it is reasonable to assume that full annotation is available for a dataset at this scale in practice. Even if it isn’t, it’s an informative baseline to show where these self-supervised methods are at comparing to a fully supervised pre-trained network. - The discussion in section 3 is interesting and insightful. The authors compared training datasets such as object-centric versus scene-centric ones, and observed different properties that the model exhibited. One natural question is then what would happen if a model is trained on \emph{combined} datasets. Can the SSL model make use of different kinds of data? - The authors compared two-crop and multi-crop augmentation in section 4, and observed that multi-crop augmentation yielded better performance. One important missing factor is the (possible) computation overhead of multi-crop strategies. My estimation is that it would increase the computation complexity (i.e., slowing the speed) of training. Therefore, one could argue that if we could train the two-crop baseline for a longer period of time it would yield better performance as well. To make the comparison fair, the computation overhead must be discussed. It can also be seen from Figure 7, for the KNN-MoCo, that the extra positive samples are fed into the network \emph{that takes the back-propagated gradients}. It will drastically increase training complexity as the network not only performs forward passing, but also the backward passing as well. - Section 4.2 experiments with AutoAugment as a stronger augmentation strategy. One possible trap is that AutoAugment’s policy is obtained by supervise training on ImageNet. Information leaking is likely. Questions - In L114 the authors concluded that for linear classification the pretraining dataset should match the target dataset in terms of being object or-scene centric. If this is true, is it a setback for SSL algorithms that strive to learn more generic representations? Then it goes back again to whether by combining two datasets SSL model can learn better representations. - In L157 the authors discussed that for transfer learning potentially only low- and mid-level visual features are useful. My intuition is that low- and mid-level features are rather easy to learn. Then how does it explain the model’s transferability increasing when we scale up pre-training datasets? Or the recent success of CLIPs? Is it possible that \emph{only} MoCo learns low- and mid-level features? Minor things that don’t play any role in my ratings. - “i.e.” -> “i.e.,”, “e.g.” -> “e.g.,” - In Eq.1, it’s better to write L_{contrastive}(x) = instead of L_{contrastive}. Also, should the equation be normalized by the number of positives? - L241 setup paragraph is overly complicated for an easy-to-explain procedure. L245/246, the use of x+ and x is very confusing. - It’s better to explain that “nearest neighbor mining” in the intro is to mine nearest neighbor in a moving embedding space in the same dataset. Overall, I like the objective of the paper a lot and I think the paper is trying to answer some important questions in SSL. But I have some reservation to confidently recommend acceptance due to the concerns as written in the “weakness” section, because this is an analysis paper and analysis needs to be rigorous. I’ll be more than happy to increase the score if those concerns are properly addressed in the feedback. The authors didn't discuss the limitations of the study. I find no potential negative societal impact.
- Section 4.2 experiments with AutoAugment as a stronger augmentation strategy. One possible trap is that AutoAugment’s policy is obtained by supervise training on ImageNet. Information leaking is likely. Questions - In L114 the authors concluded that for linear classification the pretraining dataset should match the target dataset in terms of being object or-scene centric. If this is true, is it a setback for SSL algorithms that strive to learn more generic representations? Then it goes back again to whether by combining two datasets SSL model can learn better representations.
NIPS_2019_1408
NIPS_2019
Weakness: 1. Although the four criteria (proposed by the author of this paper) for multi-modal generative models seem reasonable, they are not intrinsic generic criteria. Therefore, the argument that previous works fail for certain criteria is not strong. 2. Tabular data (seeing each attribute dimension as a modality) is another popular form of multi-modal data. It would interesting, although not necessary, to see how this model works for tabular data.
2. Tabular data (seeing each attribute dimension as a modality) is another popular form of multi-modal data. It would interesting, although not necessary, to see how this model works for tabular data.
ARR_2022_64_review
ARR_2022
1. The idea is a bit incremental and simply the extension of previous monolingual LUKE. 2.For the language-agnostic characters of entity representations, the paper has weak analysis on the alignment of entity representations. The authors could add more analysis about the multilingual alignment of entity representations and it would be better to have visualizations or case studies for different types of languages such as language family. We are also interested in whether entities from low-resourced languages are well aligned with the high-resourced ones.
2.For the language-agnostic characters of entity representations, the paper has weak analysis on the alignment of entity representations. The authors could add more analysis about the multilingual alignment of entity representations and it would be better to have visualizations or case studies for different types of languages such as language family. We are also interested in whether entities from low-resourced languages are well aligned with the high-resourced ones.
NIPS_2018_831
NIPS_2018
- I wasn't fully clear about the repeat/remember example in Section 4. I understand that the unrolled reverse computation of a TBPTT of an exactly reversible model for the repeat task is equivalent to the forward pass of a regular model for the remember task, but aren't they still quite different in major ways? First, are they really equivalent in terms of their gradient updates? In the end, they draw two different computation graphs? Second, at *test time*, the former is not auto-regressive (i.e., it uses the given input sequence) whereas the latter is. Maybe I'm missing something simple, but a more careful explanation of the example would be helpful. Also a minor issue: why are an NF-RevGRU and an LSTM compared in Appendix A? Shouldn't an NF-RevLSTM be used for a fairer comparison? - I'm not familiar with the algorithm of Maclaurin et al., so it's difficult to get much out of the description of Algorithm 1 other than its mechanics. A review/justification of the algorithm may make the paper more self-contained. - As the paper acknowledges, the reversible version has a much higher computational cost during training (2-3 times slower). Given how cheap memory is, it remains to be seen how actually practical this approach is. OTHER COMMENTS - It'd still be useful to include the perplexity/BLEU scores of a NF-Rev{GRU, LSTM} just to verify that the gating mechanism is indeed necessary. - More details on using attention would be useful, perhaps as an extra appendix.
- More details on using attention would be useful, perhaps as an extra appendix.
NIPS_2017_496
NIPS_2017
weakness of the paper is the experimental evaluation. The only experimental results are reported on synthetic datasets, i.e. MNIST and MNIST multi-set. As the objects are quite salient on the black background, it is difficult to judge the advantage of the proposed attention mechanism. In natural images, as discussed in the limitations, detecting saliency with high confidence is an issue. However, this work having been motivated partially as a framework for an improvement to existing saliency models should have been evaluated in more realistic scenarios. Furthermore, the proposed attention model is not compared with any existing attention models. Also, a comparison with human gaze based as attention (as discussed in the introduction) would be interesting. A candidate dataset is CUB annotated with human gaze data in Karessli et.al, Gaze Embeddings for Zero-Shot Image Classification, CVPR17 (another un-cited related work) which showed that human gaze based visual attention is class-specific. Minor comment: - The references list contains duplicates and the publication venues and/or the publication years of many of the papers are missing.
- The references list contains duplicates and the publication venues and/or the publication years of many of the papers are missing.
ICLR_2022_2403
ICLR_2022
1: The theoretical analysis in Theorem 1 is unclear and weak. It is unclear that what the error bound in Theorem 1 means. The authors need to analyze and compare the theoretical results to other comparable methods. 2: The title is ambiguous and may lead to inappropriate reviewers. 3: I see no code attached to this submission, which makes me a bit concerned about reproducibility.
1: The theoretical analysis in Theorem 1 is unclear and weak. It is unclear that what the error bound in Theorem 1 means. The authors need to analyze and compare the theoretical results to other comparable methods.
ICLR_2023_3381
ICLR_2023
The authors claim that they bridge an important gap between IBC [2] and RvS by modeling the dependencies between the state, action, and return with an implicit model on Page 6. However, noticing that IBC proposes to use the implicit model to model the dependencies between the state and action, I think the contribution of this paper is to introduce the return from RvS to the implicit model. Thus, the proposed method looks like a combination of IBC and RvS. The authors conduct experiments in Section 5.1 to show the advantages of the implicit model. However, such advantages are similar to IBC, which could hurt the novelty of this paper. The authors may want to highlight the novelty of the proposed method against IBC. The discussions of the empirical results in Sections 5.1 and 5.2.2 are missing. The authors may want to explain: 1) why the RvS method fails to reach either goal and converges to the purple point in Figure 4(b); 2) why the explicit methods perform better than implicit methods on the locomotion tasks. The pseudo-code of the proposed method is missing. [1] Søren Asmussen and Peter W Glynn. Stochastic simulation: algorithms and analysis, volume 57. Springer, 2007. [2] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mordatch, and J. Tompson. Implicit behavioral cloning. In Proceedings of the 5th Conference on Robot Learning. PMLR, 2022.
2) why the explicit methods perform better than implicit methods on the locomotion tasks. The pseudo-code of the proposed method is missing. [1] Søren Asmussen and Peter W Glynn. Stochastic simulation: algorithms and analysis, volume 57. Springer, 2007. [2] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mordatch, and J. Tompson. Implicit behavioral cloning. In Proceedings of the 5th Conference on Robot Learning. PMLR, 2022.
NIPS_2021_1527
NIPS_2021
Weakness: The unbalanced data scenario has not been properly explored by experiments. Under what circumstances can it be counted as an unbalanced data scenario, and what is the data ratio? Therefore, the experiments should not pay more attention to one given setting like TED, WMT, etc., but should construct unbalanced scenarios of different ratios by sampling data in one setting like WMT to verify this important issue. There is a lack of a reasonable ablation study on the upsampling parameter T, so we cannot confirm whether the oversampling overfit phenomenon will occur, and to what extent will the upsampling reach. Some baselines are missing in the experimental comparison, such as 1) giving different weights to the loss of unbalanced translation pairs so that in the later stages of training, there will be no situation where rich-resource pairs dominate the training loss; 2) the use of low-resource language pairs further finetune the multilingual model and use the method like R3F to maintain the generalization ability of the model. In some low-resource language translations from 1.2->2.0, although the improvement of 0.8 can be claimed, it is insignificant in a practical sense. Missing References: Aghajanyan, Armen, et al. "Better Fine-Tuning by Reducing Representational Collapse." International Conference on Learning Representations. 2020.
2) the use of low-resource language pairs further finetune the multilingual model and use the method like R3F to maintain the generalization ability of the model. In some low-resource language translations from 1.2->2.0, although the improvement of 0.8 can be claimed, it is insignificant in a practical sense. Missing References: Aghajanyan, Armen, et al. "Better Fine-Tuning by Reducing Representational Collapse." International Conference on Learning Representations. 2020.
NIPS_2016_314
NIPS_2016
I found in the paper includes: 1. The paper mentions that their model can work well for a variety of image noise, but they show results only on images corrupted using Gaussian noise. Is there any particular reason for the same? 2. I can't find details on how they make the network fit the residual instead of directly learning the input - output mapping. - Is it through the use of skip connections? If so, this argument would make more sense if the skip connections exist after every layer (not every 2 layers) 3. It would have been nice if there was an ablation study on what plays the most important factor on the improvement in performance. Whether it is the number of layers or the skip connections, and how does the performance vary when the skip connections are used for every layer. 4. The paper says that almost all existing methods estimate the corruption level at first. There is a high possibility that the same is happening in the initial layers of their Residual net. If so, the only advantage is that theirs is end to end. 5. The authors mention in the Related works section that the use of regularization helps the problem of image- restoration, but they don’t use any type of regularization in their proposed model. It would be great if the authors can address these points (mainly 1, 2 and 3) in the rebuttal.
1. The paper mentions that their model can work well for a variety of image noise, but they show results only on images corrupted using Gaussian noise. Is there any particular reason for the same?
VwyTrglgmW
ICLR_2024
1. The authors claim that the existing PU learning methods will suffer a gradual decline in performance as the dimensionality of the data increases. It would be better if the authors can visualize this effect. This is very important as this is the research motivation of this paper. 2. Since the authors claim that the high dimensionality is harmful for the PU methods, have the authors tried to firstly implement dimension reduction via some existing approaches and then deploy traditional PU classifiers? 3. In problem setup, the authors should clarify whether their method belongs to case-control PU learning or censoring PU learning, as their generation ways of P data and U data are quite different. 4. The proposed algorithm contains Kmeans operation. Note that if there are many examples with high dimension, Kmeans will be very inefficient. 5. The authors should compare their algorithm with SOTA methods and typical methods on these benchmark datasets. 6. The figures in this paper are in low quality. Besides, the writing of this paper is also far from perfect.
1. The authors claim that the existing PU learning methods will suffer a gradual decline in performance as the dimensionality of the data increases. It would be better if the authors can visualize this effect. This is very important as this is the research motivation of this paper.
NIPS_2019_1397
NIPS_2019
weakness of the manuscript. Clarity: The manuscript is well-written in general. It does a good job in explaining many results and subtle points (e.g., blessing of dimensionality). On the other hand, I think there is still room for improvement in the structure of the manuscript. The methodology seems fully explainable by Theorem 2.2. Therefore, Theorem 2.1 doesn't seem necessary in the main paper, and can be move to the supplement as a lemma to save space. Furthermore, a few important results could be moved from the supplement back to the main paper (e.g., Algorithm 1 and Table 2). Originality: The main results seem innovative to me in general. Although optimizing information-theoretic objective functions is not new, I find the new objective function adequately novel, especially in the treatment of the Q_i's in relation to TC(Z|X_i). Relevant lines of research are also summarized well in the related work section. Significance: The proposed methodology has many favorable features, including low computational complexity, good performance under (near) modular latent factor models, and blessing of dimensionality. I believe these will make the new method very attractive to the community. Moreover, the formulation of the objective function itself would also be of great theoretical interest. Overall, I think the manuscript would make a fairly significant contribution. Itemized comments: 1. The number of latent factors m is assumed to be constant throughout the paper. I wonder if that's necessary. The blessing of dimensionality still seems to hold if m increases slowly with p, and computational complexity can be still advantageous compared to GLASSO. 2. Line 125: For completeness, please state the final objective function (empirical version of (3)) as a function of X_i and the parameters. 3. Section 4.1: The simulation is conducted under a joint Gaussian model. Therefore, ICA should be identical with PCA, and can be removed from the comparisons. Indeed, the ICA curve is almost identical with the PCA curve in Figure 2. 4. In the covariance estimation experiments, negative log likelihood under Gaussian model is used as the performance metric for both stock market data and OpenML datasets. This seems unreasonable since the real data in the experiment may not be Gaussian. For example, there is extensive evidence that stock returns are not Gaussian. Gaussian likelihood also seems unfair as a performance metric, since it may favor methods derived under Gaussian assumptions, like the proposed method. For comparing the results under these real datasets, it might be better to focus on interpretability, or indirect metrics (e.g., portfolio performance for stock return data). 5. The equation below Line 412: the p(z) factor should be removed in the expression for p(x|z). 6. Line 429: It seems we don't need Gaussian assumption to obtain Cov(Z_j, Z_k | X_i) = 0. 7. Line 480: Why do we need to combine with law of total variance to obtain Cov(X_i, X_{l != i} | Z) = 0? 8. Lines 496 and 501: It seems the Z in the denominator should be p(z). 9. The equation below Line 502: I think the '+' sign after \nu_j should be a '-' sign. In the definition of B under Line 503, there should be a '-' sign before \sum_{j=1}^m, and the '-' sign after \nu_j should be a '+' sign. In Line 504, we should have \nu_{X_i|Z} = - B/(2A). Minor comments: 10. The manuscript could be more reader-friendly if the mathematical definitions for H(X), I(X;Y), TC(X), and TC(X|Z) were state (in the supplementary material if no space in the main article). References to these are necessary when following the proofs/derivations. 11. Line 208: black -> block 12. Line 242: 50 real-world datasets -> 51 real-world datasets (according to Line 260 and Table 2) 13. References [7, 25, 29]: gaussian -> Gaussian Update: Thanks to the authors' for the response. A couple minor comments: - Regarding the empirical version of the objective (3), it might be appropriate to put it in the supplementary materials. - Regarding the Gaussian evaluation metric, I think it would be helpful to include the comments as a note in the paper.
- Regarding the empirical version of the objective (3), it might be appropriate to put it in the supplementary materials.
NIPS_2019_1420
NIPS_2019
Weakness - Not completely sure about the meaning of the results of certain experiments and the paper refuses to hypothesize any explanations. Other results show very little difference between the alternatives and unclear whether they are significant. - Lot of result description is needlessly convoluted e.g. "less likely to produce less easier to teach and less structured languages when no listener gets reset". ** Suggestions - A related idea of speaker-listener communication from a teachability perspective was studied in [1] - In light of [2], it's pertinent that we check that useful communication is actually happening. The differences in figures seem too small. Although the topography plots do seem to indicate something reasonable going on. [1]: https://arxiv.org/abs/1806.06464 [2]: https://arxiv.org/abs/1903.05168
- Lot of result description is needlessly convoluted e.g. "less likely to produce less easier to teach and less structured languages when no listener gets reset". ** Suggestions - A related idea of speaker-listener communication from a teachability perspective was studied in [1] - In light of [2], it's pertinent that we check that useful communication is actually happening. The differences in figures seem too small. Although the topography plots do seem to indicate something reasonable going on. [1]: https://arxiv.org/abs/1806.06464 [2]: https://arxiv.org/abs/1903.05168
ICLR_2022_497
ICLR_2022
I have the following questions to which I wish the author could respond in the rebuttal. If I missed something in the paper, I would appreciate it if the authors could point them out. Main concerns: - In my understanding, the best scenarios are those generated from the true distribution P (over the scenarios), and therefore, the CVAE essentially attempts to approximate the true distribution P. In such a sense, if the true distribution P is independent of the context (which is the case in the experiments in this paper), I do not see the rationale for having the scenarios conditioned on the context, which in theory does not provide any statistical evidence. Therefore, the rationale behind CVAE-SIP is not clear to me. If the goal is not to approximate P but to solve the optimization problem, then having the objective values involved as a predicting goal is reasonable; in this case, having the context involved is justified because they can have an impact on the optimization results. Thus, CVAE-SIPA to me is a valid method. - While reducing the scenarios from 200 to 10 is promising, the quality of optimization has decreased a little bit. On the other hand, in Figure 2, using K-medoids with K=20 can perfectly recover the original value, which suggests that K-medoids is a decent solution and complex learning methods are not necessary for the considered settings. In addition, I am also wondering the performance under the setting that the 200 scenarios (or random scenarios of a certain number from the true distributions) are directly used as the input of CPLEX. In addition, to justify the performance, it is necessary to provide information about robustness as well as to identify the case where simple methods are not satisfactory (such as larger graphs). Minor concerns: - Given the structure of the proposed CVAE, the generation process takes the input of z and c where z is derived from w . This suggests that the proposed method requires us to know a collection of scenarios from the true distribution. If this is the case, it would be better to have a clear problem statement in Sec 3. Based on such understanding, I am wondering about the process of generating scenarios used for getting K representatives - it would be great if codes like Alg 1 was provided. - I would assume that the performance is closely related to the number of scenarios used for training, and therefore, it is interesting to examine the performance with different numbers of scenarios (which is fixed as 200 in the paper). - The structure of the encoder is not clear to me. The notation q_{\phi} is used to denote two different functions q(z w,D) and q ( c , D ) . Does that mean they are the same network? - It would be better to experimentally justify the choice of the dimension of c and z. - It looks to me that the proposed methods are designed for graph-based problems, while two-stage integer programming does not have to be graph problems in general. If this is the case, it would be better to clearly indicate the scope of the considered problem. Before reaching Sec 4.2, I was thinking that the paper could address general settings. - The paper introduces CVAE-SIP and CVAE-SIPA in Sec 5 -- after discussing the training methods, so I am wondering if they follow the same training scheme. In particular, it is not clear to me by saying “append objective values to the representations” at the beginning of Sec 5. - The approximation error is defined as the gap between the objective values, which is somehow ambiguous unless one has seen the values in the table. It would be better to provide a mathematical characterization.
- The approximation error is defined as the gap between the objective values, which is somehow ambiguous unless one has seen the values in the table. It would be better to provide a mathematical characterization.
NIPS_2018_641
NIPS_2018
weakness. First, the main result, Corollary 10, is not very strong. It is asymptotic, and requires the iterates to lie in a "good" set of regular parameters; the condition on the iterates was not checked. Corollary 10 only requires a lower bound on the regularization parameter; however, if the parameter is set too large such that the regularization term is dominating, then the output will be statistically meaningless. Second, there is an obvious gap between the interpretation and what has been proved. Even if Corollary 10 holds under more general and acceptable conditions, it only says that uncertainty sampling iterates along the descent directions of the expected 0-1 loss. I don't think that one may claim that uncertainty sampling is SGD merely based on Corollary 10. Furthermore, existing results for SGD require some regularity conditions on the objective function, and the learning rate should be chosen properly with respect to the conditions; as the conditions were not checked for the expected 0-1 loss and the "learning rate" in uncertainty sampling was not specified, it seems not very rigorous to explain empirical observations based on existing results of SGD. The paper is overall well-structured. I appreciate the authors' trying providing some intuitive explanations of the proofs, though there are some over-simplifications in my view. The writing looks very hasty; there are many typos and minor grammar mistakes. I would say that this work is a good starting point for an interesting research direction, but currently not very sufficient for publication. Other comments: 1. ln. 52: Not all convex programs can be efficiently solved. See, e.g. "Gradient methods for minimizing composite functions" by Yu. Nesterov. 2. ln. 55: I don't see why the regularized empirical risk minimizer will converge to the risk minimizer without any condition on, for example, the regularization parameter. 3. ln. 180--182: Corollar 10 only shows that uncertainty sampling moves in descent directions of the expected 0-1 loss; this does not necessarily mean that uncertainty sampling is not minimizing the expected convex surrogate. 4. ln. 182--184: Non-convexity may not be an issue for the SGD to converge, if the function Z has some good properties. 5. The proofs in the supplementary material are too terse.
3. ln. 180--182: Corollar 10 only shows that uncertainty sampling moves in descent directions of the expected 0-1 loss; this does not necessarily mean that uncertainty sampling is not minimizing the expected convex surrogate.
NIPS_2016_394
NIPS_2016
- The theoretical results don't have immediate practical implications, although this is certainly understandable given the novelty of the work. As someone who is more of an applied researcher who occasionally dabbles in theory, it would be ideal to see more take-away points for practitioners. The main take-away point that I observed is to query a cluster proportionally to the square root of its size, but it's unclear if this is a novel finding in this paper. - The proposed model produces only 1 node changing cluster per time step on average because the reassignment probability is 1/n. This allows for only very slow dynamics. Furthermore, the proposed evolution model is very simplistic in that no other edges are changed aside from edges with the (on average) 1 node changing cluster. - Motivation by the rate limits of social media APIs is a bit weak. The motivation would suggest that it examines the error given constraints on the number of queries. The paper actually examines the number of probes/queries necessary to achieve a near-optimal error, which is a related problem but not necessarily applicable to the social media API motivation. The resource-constrained sampling motivation is more general and a better fit to the problem actually considered in this paper, in my opinion. Suggestions: Please comment on optimality in the general case. From the discussion in the last paragraph in Section 4.3, it appears that the proposed queue algorithm would is a multiplicative factor of 1/beta from optimality. Is this indeed the case? Why not also show experiment results for just using the algorithm of Theorem 4 in addition to the random baselines? This would allow the reader to see how much practical benefit the queue algorithm provides. Line 308: You state that you show the average and standard deviation, but standard deviation is not visible in Figure 1. Are error bars present but just too small to be visible? If so, state that it is the case. Line 93: "asymptoticall" -> "asymptotically" Line 109: "the some relevant features" -> Remove "the" or "some" Line 182: "queries per steps" -> "queries per step" Line 196-197: "every neighbor of neighbor of v" -> "neighbor of" repeated Line 263: Reference to Appendix in supplementary material shows ?? Line 269: In the equation for \epsilon, perhaps it would help to put parentheses around log n, i.e. (log n)/n rather than log n/n. Line 276: "issues query" -> I believe this should be "issues 1 query" Line 278: "loosing" -> "losing" I have read the author rebuttal and other reviews and have decided not to change my scores.
- The proposed model produces only 1 node changing cluster per time step on average because the reassignment probability is 1/n. This allows for only very slow dynamics. Furthermore, the proposed evolution model is very simplistic in that no other edges are changed aside from edges with the (on average) 1 node changing cluster.
NIPS_2021_1954
NIPS_2021
Some state-of-the-art partial multi-label references are missing, such as 1) Partial Multi-Label Learning with Label Distribution 2) Noisy label tolerance: A new perspective of Partial Multi-Label Learning 3) Partial multi-label learning with mutual teaching. The explanation of Theorem 1 is weak; the author should provide more explanations. Can the author do the experiments on the image data set?
2) Noisy label tolerance: A new perspective of Partial Multi-Label Learning
NIPS_2019_168
NIPS_2019
of the submission. * originality: This is a highly specialized contribution building up novel results on two main fronts: The derivation of the lower bound on the competitive ratio of any online algorithm and the introduction of two variants of an existing algorithm so as to meet this lower bound. Most of the proofs and techniques are natural and not surprising. In my view the main contribution is the introduction of the regularized version which brings a different, and arguably more modern interpretation, about the conditions under which these online algorithms perform well in these adversarial settings. * quality: The technical content of the paper is sound and rigorous * clarity: The paper is in general very well-written, and should be easy to follow for expert readers. * significance: As mentioned above this is a very specialized paper likely to interest some experts in the online convex optimization communities. Although narrow in scope, it contains interesting theoretical results advancing the state of the art in dealing with these specific problems. * minor details/comments: - p.1, line 6-7: I would rewrite the sentence to simply express that the lower bound is $\Omega(m^{-1/2})$ \- p.3, line 141: cost an algorithm => cost of an algorithm \- p.4, Algorithm 1, step 3: mention somewhere that this is the projection operator (not every reader will be familiar with this notation \- p.5, Theorem 2: remind the reader that the $\gamma$ in the statement is the parameter of OBD as defined in Algorithm 1 \- p.8, line 314: why surprisingly?
* significance: As mentioned above this is a very specialized paper likely to interest some experts in the online convex optimization communities. Although narrow in scope, it contains interesting theoretical results advancing the state of the art in dealing with these specific problems.
ICLR_2021_457
ICLR_2021
My main concern is that it is not completely clear to me how the authors suggest using the dataset for developing AI that is more ethical. I can clearly understand that one can use it to train an auxiliary model that will test/verify/give value for RL etc. I can also see that using it to fine tune language models and test them as done in the paper, can give an idea of how the language representation is aligned with or represents well ethical concepts. But it seems that the authors are trying to claim something broader when they say ““By defining and benchmarking a model’s understanding of basic concepts in ETHICS…” and “To do well on the ETHICS dataset, models must know about the morally relevant factors emphasized by each of these ethical systems”. It sounds as if they claim that given a model one can benchmark it on the dataset. If that is the case, they should explain how (for example say I develop a model that filters CVs and I want to see if it is fair, how can I use the dataset to test that model?). If not, I would suggest being clearer about the way the dataset can be used. In addition, I personally do not like using language such as “With the ETHICS dataset, we find that current language models have a promising but incomplete understanding of basic ethical knowledge.” Or “By defining and benchmarking a model’s understanding of basic concepts in ETHICS, we enable future research necessary for ethical AI”. I think that even if a model can perform well on the ETHICS dataset, it is far from clear that it has understanding of ethical concepts. It is a leap of faith in my mind to conclude from what is essentially learning a classification task to ethical understanding. I would like to see the authors make more precise claims in that respect. Recommendation: I vote for accepting this paper, at its current state marginally above threshold but provided some clarifications, I find this a clear accept. I think the area of ethical AI is important, releasing a well-constructed dataset is an important step forward and overall this paper should be of interest to the ICLR community. Questions and minor comments: 1.There are missing details about division to train and test sets, numbers as well as how the division was made (simply random? Any other considerations?). These details should be added. 2. In the Impartiality section there is missing reference to Fig 2 – it is given only later so one does not see the relevant examples. Post-rebuttal comments: My concerns are resolved. I have changed my vote to acceptance. (7).
1.There are missing details about division to train and test sets, numbers as well as how the division was made (simply random? Any other considerations?). These details should be added.
NIPS_2019_651
NIPS_2019
(large relative error compared to AA on full dataset) are reported. - Clarity: The submission is well written and easy to follow, the concept of coresets is well motivated and explained. While some more implementation details could be provided (source code is intended to be provided with camera-ready version), a re-implementation of the method appears feasible. - Significance: The submission provides a method to perform (approximate) AA on large datasets by making use of coresets and therefore might be potentially useful for a variety of applications. Detailed remarks/questions: 1. Algorithm 2 provides the coreset C and the query Q consists of the archetypes z_1, …, z_k which are initialised with the FurthestSum procedure. However, it is not quite clear to me how the archetype positions are updated after initialisation. Could the authors please comment on that? 2. The presented theorems provide guarantees for the objective functions phi on data X and coreset C for a query Q. Table 1 reporting the relative errors suggests that there might be a substantial deviation between coreset and full dataset archetypes. However, the interpretation of archetypes in a particular application is when AA proves particularly useful (as for example in [1] or [2]). Is the archetypal interpretation of identifying (more or less) stable prototypes whose convex combinations describe the data still applicable? 3. Practically, the number of archetypes k is of interest. In the presented framework, is there a way to perform model selection in order to identify an appropriate k? 4. The work in [3] might be worth to mention as a related approach. There, the edacious nature of AA is approached by learning latent representation of the dataset as a convex combination of (learnt) archetypes and can be viewed as a non-linear AA approach. [1] Shoval et al., Evolutionary Trade-Offs, Pareto Optimality, and the Geometry of Phenotype Space, Science 2012. [2] Hart et al., Inferring biological tasks using Pareto analysis of high-dimensional data, Nature Methods 2015. [3] Keller et al., Deep Archetypal Analysis, arxiv preprint 2019. ---------------------------------------------------------------------------------------------------------------------- I appreciate the authors’ response and the additional experimental results. I consider the plot of the coreset archetypes on a toy experiment insightful and it might be a relevant addition to the appendix. In my opinion, the submission constitutes a relevant contribution to archetypal analysis which makes it more feasible in real-world applications and provides some theoretical guarantees. Therefore, I raise my assessment to accept.
- Clarity: The submission is well written and easy to follow, the concept of coresets is well motivated and explained. While some more implementation details could be provided (source code is intended to be provided with camera-ready version), a re-implementation of the method appears feasible.
NIPS_2022_1708
NIPS_2022
Scalability: The proposed encoding method is templated-based (Line 155-156). Although the input encoding scheme (Section 7.1) may be a trivial problem, the encoding scheme may still affect the performance. Searching for the optimal encoding scheme is an expensive process, which may bring a high cost of hand-crafted engineering. Besides, the data gathering method also relies on hand-designed templates (Line 220). Presentation: The related work of PLM is adequately cited. But the authors should also introduce the background of policy learning so that the significance of this work can be highlighted. Performance: Compared to the work that uses traditional networks like DQN, the integration of PLM may affect the inference speed. Clarity: Most parts of this paper are well written. However, there are some typos in the paper: Line 53: pretrained LMs -> pre-trained LMs Line 104: language -> language. (missing full stop mark) Some papers should be cited in a proper way: Line 108: [23], Line 109: [36], Line 285:[15], Line 287 [15]. For example, in Line 108, "[23] show that" needs to be rewritten as "Frozen Pretrained Transformer (FPT) [23] show that". [Rebuttal Updates] The authors provided the additional experiments for addressing my concern of scalability. The authors also revised the typos and added the related works. Societal Impact: No potential negative societal impact. The authors provide a new perspective to aid policy learning with a pre-trained language model. Limitation: 1) Building text descriptions for each task still requires human labor. We do not know what textual format is optimal for policy learning. It varies from task to task, model to model. On the other hand, as I stated in Question 1, the long-text input could restrict the scalability of this framework. 2) The proposed methods also need humans to design some templates/rules, as the authors mentioned in the conclusion part.
1) Building text descriptions for each task still requires human labor. We do not know what textual format is optimal for policy learning. It varies from task to task, model to model. On the other hand, as I stated in Question 1, the long-text input could restrict the scalability of this framework.
NIPS_2022_1637
NIPS_2022
1. The examples of scoring systems in the Introduction seem out of date, there are many newer and recognized clinical scoring systems. It also should briefly introduce the traditional framework of the scoring system and its difference in methodology and performance with the proposed method. 2. As shown in figure 3, the performance improvement of proposed methods seems not so significant, the biggest improvement in the bank dataset was ~0.02. Additionally, using some tables to directly show the key improvements may be more intuitive and detailed. 3. Although extensive experiments and discussion on performance, in my opinion, its most significant improvement would be efficiency, and there are few discussions or ablation experiments on efficiency. 4. The model AUC can assess the model discriminant ability, i.e., the probability of a positive case is bigger than that of a negative case, but may be hard to show its consistency between predicted score and actual risk. However, this consistency may be more crucial to the clinical scoring system (differentiated with classification task). Therefore, the related studies are encouraged to conduct calibration curves to show the agreement. It would be better to prove the feasibility of the generated scoring system? The difference between the traditional method and our method can also be discussed in this paper.
2. As shown in figure 3, the performance improvement of proposed methods seems not so significant, the biggest improvement in the bank dataset was ~0.02. Additionally, using some tables to directly show the key improvements may be more intuitive and detailed.
ICLR_2021_1043
ICLR_2021
, which justify the score: • The theoretical developments presented in the paper build on the Rademacher complexity, but ignore the conclusions drawn by Zhang et al. in Section 2.2 of their ICLR 2017 paper (Understanding deep learning requires rethinking generalization). • The theoretical developments build on the assumption that (i) there exists a lower bound, valid for any input, to the distance between the output of each pair of neurons, and (ii) the proposed diversity loss increases this lower bound. Those two assumptions are central to the theoretical developments, but are quite arguable. For example, a pair of neuron that is not activated by a sample, which is quite common, leads to a zero lower bound. • Experimental validation are not convincing. Only shallow networks are considered (2 or 3 layers), and the optimization strategy, including the grid search strategy for hyperparameters selection, is not described. Minor issue: positioning with respect to related works is limited. For example, layer redundancy (which is the opposite of diversity) has been considered in the context of network pruning: https://openaccess.thecvf.com/content_CVPR_2019/papers/He_Filter_Pruning_via_Geometric_Median_for_Deep_Convolutional_Neural_Networks_CVPR_2019_paper.pdf
• Experimental validation are not convincing. Only shallow networks are considered (2 or 3 layers), and the optimization strategy, including the grid search strategy for hyperparameters selection, is not described. Minor issue: positioning with respect to related works is limited. For example, layer redundancy (which is the opposite of diversity) has been considered in the context of network pruning: https://openaccess.thecvf.com/content_CVPR_2019/papers/He_Filter_Pruning_via_Geometric_Median_for_Deep_Convolutional_Neural_Networks_CVPR_2019_paper.pdf
z62Xc88jgF
ICLR_2024
1. Although the use of this type of loss in this setting might be new, this work does not prove any new theoretical results. 2. That being said, experiment is a very important component in this paper, however, I find the evaluation metric of the solution very interesting. More specifically, let $u$ be the output of neural networks and $u^*$ be the exact solution. The test error is usually computed using relative $L^2$ norm (See for example [1][2]), i.e. $$|| u - u^*||_2^2 / ||u^*||_2^2 = \int|u - u^*|^2dx / \int |u^*|^2 dx.$$ However, in Figure 4, when evaluating solutions, the mean error is computed using equation (15), the energy norm. (i). why not using the relative $L^2$ norm? How does Astral loss perform if the evaluation is done in $L^2$? (ii). The a posteriori error bound is in the energy norm, i.e. $$L(u, w_L) \leq |||u-u^*||| \leq U(u, w_U).$$ so I would naturally expect Astral loss to achieve fairly small error in this energy norm, but this does not necessarily imply the solution is "better". Equations can be solved in different spaces. In fact, I think the space $L^2$ is more commonly used when people study existence and uniqueness of PDE solutions. (iii). There could be a relation between the energy norm and $L^2$ norm. More explanation is needed for the specific choice of the evaluation metric since it differs from the previous literature. [1] Li et al., Physics-Informed Neural Operator for Learning Partial Differential Equations [2] Wang et al., An Expert's Guide to Training Physics-informed Neural Networks
1. Although the use of this type of loss in this setting might be new, this work does not prove any new theoretical results.
ICLR_2022_1014
ICLR_2022
1. It seems to me that a very straightforward hypothesis about these two parts would be that the trivial part is what’s very simple, either highly consistent to what’s in the training set, or the images with very typical object pose in the center of the images; and for the impossible part, it might be the images with ambiguous labels, atypical object pose or position. I think the human test results would support this hypothesis, but I wonder whether the authors could provide more evidence to either prove or disprove this hypothesis. 2. The figure 6 is very confusing to me. The caption says that the right part is original ImageNet test set, but the texts on the image actually say it’s the left part. If the texts on the image are right, then the right panel is the consistency on the validation images between the two parts. If I understand the experiments correctly, these results are for models trained on ImageNet training set without the trivial or the impossible part and then tested on ImageNet validation set without the two parts. Although it’s good to see the lower consistency, it should be compared to the consistency between models trained on the whole ImageNet training set and tested on ImageNet validation set without the two parts, which I cannot find. Is the consistency lower because of the changed training process or the changed validation set? 3. It is also unclear how surprising we should be towards the consistency distribution, is this a result of an exponential distribution of the general “identification” difficulty (most images are simple, then less and less are more difficult)?
1. It seems to me that a very straightforward hypothesis about these two parts would be that the trivial part is what’s very simple, either highly consistent to what’s in the training set, or the images with very typical object pose in the center of the images; and for the impossible part, it might be the images with ambiguous labels, atypical object pose or position. I think the human test results would support this hypothesis, but I wonder whether the authors could provide more evidence to either prove or disprove this hypothesis.
NIPS_2016_93
NIPS_2016
/ Major concerns: - It is difficult to evaluate whether the MovieQA result should be considered significant given that +10% gap exists between MemN2N on dataset with explicit answers (Task 1) and RBI + FP on dataset with other forms of supervision, especially Task 3. If I understood correctly, the different tasks are coming from the same data, but authors provide different forms of supervision. Also, Task 3 gives full supervision of the answers. Then I wonder why RBI + FP on task 3 (69%) is doing much worse than MemN2N on task 1 (80%). Is it because the supervision is presented in a more implicit way ("No, the answer is kitchen" instead of "kitchen")? - For RBI, they only train on rewarded actions. Then this means rewardless actions that get useful supervision (such as "No, the answer is Timothy Dalton." in Task 3) is ignored as well. I think this could be one significant factor that makes FP + RBI better than RBI alone. If not, I think the authors should provide stronger baseline than RBI (that is supervised by such feedback) to prove the usefulness of FP. Questions / Minor concerns: - For bAbI, it seems the model was only tested on single supporting fact dataset (Task 1 of bAbI). How about other tasks? - How is dialog dataset obtained from QA datasets? Are you using a few simple rules? - Lack of lexical / syntactic diversity of teacher feedback: assuming the teacher feedback was auto-generated, do you intend to turk the teacher feedback and / or generate a few different kinds of feedback (which is more real-life situation)? - How does other models than MemN2N do on MovieQA?
- For bAbI, it seems the model was only tested on single supporting fact dataset (Task 1 of bAbI). How about other tasks?
ICLR_2022_2318
ICLR_2022
Weakness: 1. This paper is built on the SPAIR framework and focuses on point cloud data, which is somehow incremental. 2. There is no ablation study to validate the effectiveness of the proposed components and the loss. 3. It is hard to follow Sec. 3.2. The author may improve it and give more illustrations and examples. 4. It is unclear how the method can work and decompose a scene into different objects. I did not see how Chamfer Mixture loss can achieve this goal. More explanation should go here.
3. It is hard to follow Sec. 3.2. The author may improve it and give more illustrations and examples.
ICLR_2022_2070
ICLR_2022
Weakness: 1 The idea is a bit too straightforward, i.e., using the attributes of the items/users and their embeddings to bridge any two domains. 2 The technical contribution is limited, i.e., there is no significant technical contribution and extension based on a typical model for the cross-domain recommendation setting.
2 The technical contribution is limited, i.e., there is no significant technical contribution and extension based on a typical model for the cross-domain recommendation setting.
ICLR_2021_842
ICLR_2021
1. Performance gains on downstream tasks of detection and instance segmentation are much lower -- how would the authors propose to improve these? 2. If the primary goal is to improve SSL performance on small models, I would have liked to see more analysis on how different design choices of setting up contrastive learning affect model performance and if these could aid performance improvement, in addition to knowledge distillation. Questions and suggestions: 1. Adding fully-supervised baselines for small models in table 1 will be useful in understanding the gap between full supervision and SSL for these models. 2. In figure 3, does 100% (green line) represent the student network trained with 100% of labeled imagenet supervised data? It is hard to interpret what these numbers represent. 3. Minor point: Some citations, which should not be in parentheses, are in parentheses (e.g., Romero et al. page 8). Please fix this in the revision.
1. Adding fully-supervised baselines for small models in table 1 will be useful in understanding the gap between full supervision and SSL for these models.
ICLR_2022_445
ICLR_2022
Weakness: Method: 1. Novelty: Incremental Contribution: The proposed LaMOO is a direct generalization from the LaMCTS method to multi-objective optimization (MOO). The novel part is to use dominance number as criteria for search space partition and hypervolume for promising region selection. These are all straightforward generalizations for MOO. The contribution of this work is somewhat incremental along the line of LaNAS, LaMCTS, and LaP^3. Missing Closely Related Approaches: This work claims the proposed approach to learn the promising region is fundamentally different from the previous works. However, many classification-based search space partition methods have been proposed in the machine learning community, see [1][2][3] (classification + random sampling). (Tree-based) space partition methods have been widely used for black-box optimization [4][5][6]. In addition, there are also different works on classification-based MOO [7] (SVM + NSGA-II/MO-CMA-ES) [8] (Ordinal SVM + NSGA-II) [9]. 2. Theoretical Analysis: A large part of this work is on the theoretical understanding for space partition and LaMCTS. However, the analysis is mostly for single-objective optimization, and the extension to multi-objective optimization is much less promising. 3. Why LaMOO Works: Further discussions are needed to clearly clarify the properties of LaMOO. Dominance-based Approach for Many Objective Optimization: LaMOO uses the dominance number as the split criteria to train the SVM models and partition the search space. However, the dominance-based method is typically not good for many objective optimization due to the lack of dominance pressure (e.g., all solutions are non-dominated with each other, and all have the same dominated number). Why is LaMOO still good for many objective optimization? Combination with Multi-Objective Bayesian Optimization (MOBO): It is straightforward to see the benefit of using LaMOO with model-free optimization (e.g., NSGA-II and MO-CMA-ES). However, it is not so clear to understand why it also works for MOBO (e.g., qEHVI). The qEHVI approach already builds (global) Gaussian process models to approximate each objective function, and uses hypervolume-based criteria to select the most promising solution(s) (e.g., maximizing the expected hypervolume improvement) for evaluation. Therefore, its selected solution(s) should be already on the approximate Pareto front without the LaMOO approach. Is the good performance due to only use solutions in the promising region to build the models (but I think GP would work well with all data as in the setting considered in this work)? Or because LaMOO restricts the search in the region close to the current best non-dominated solutions (then what is the relation to the trust-region approach [10])? Exploitation v.s. Exploration: With LaMOO, the solutions can only be selected from the most promising region (e.g., around the current Pareto front), which is good for exploitation. However, will this approach lead to worse overall performance due to the lack of exploration (e.g., cannot find more diverse Pareto solutions far from the current Pareto front)? 4. Time Complexity: What is the time complexity of the proposed algorithm? In each step of LaMOO, it has to repeatedly calculate the hypervolume of different regions for promising region selection. However, the computation of hypervolume could be time-consuming, especially for problems with many objectives (e.g., >3). Would it make LaMOO impractical for those problems? 5. Inaccurate Description for MOO Methods: CMA-ES: CMA-ES is a widely-used single objective optimization algorithm [11]. The multi-objective version proposed in (Igel et al., 2007a) is usually called MO-CMA-ES. It is also confusing why most citation for the MO-CMA-ES (in the main paper and Table 1) is for the steady-state updated version (Igel et al., 2007b) but not for the original paper (Igel et al., 2007a). ParEGO: The seminal algorithm proposed in Knowles (2006) is called ParEGO and the qParEGO is a parallel extension recently proposed in Daulton et al. (2020). It is not suitable to refer the algorithm in Knowles (2006) as qParEGO in Table 1 and the main text. MOEA/D: In my understanding, MOEA/D is suitable for many objective optimization (objectives > 3), see its performance in the NSGA-III paper (Deb & Jain, 2014), while the main challenge is how to specify the weight vector for a new problem with unknown Pareto front as correctly pointed out in this work. Hypervolume-based Method: This work indicates the indicator-based method is better for many objective optimization. However, the time complexity and expensive calculation could make the hypervolume-based method impractical for many-objective optimization. Experiment: 6. Missing Experimental Setting: Many important experiment settings are missing in this work, such as the number of initial solutions for MOBO (and its generation method), the number of batched solutions for MOBO (e.g., q), the reference point for hypervolume (during the optimization, and for the final evaluation), the ground truth Pareto front used for calculating the log hypervolume difference for real-world problems (e.g., Nasbench 201). 7. Comparison to Model-Free Evolutionary Algorithm: It is reasonable that LaMOO can improve the MO-CMA-ES performance since it builds extra models to allocate computation to the most promising region. However, in my understanding, the model-free evolutionary algorithms are not designed for expensive optimization, and their typical use case is with a large number of cheap evaluations with a fast run time. It is more interesting to directly compare LaMOO with other model-based methods (e.g., MO-CMA-ES with GP models). 8. MOBO Performance: What are the hyperparameters for qEHVI? It seems its performance on VehicleSafty problem is worse than those reported in the original paper Daulton et al. (2020). 9. Wall Clock Run Time: Please report the wall clock run time for both LaMOO and other model-free/model-based algorithms, as in Daulton et al. (2020). Minor Issues: When citing multiple works, please put them in chronological order. Reference: [1] Hashimoto, Tatsunori, Steve Yadlowsky, and John Duchi. Derivative free optimization via repeated classification. AISTATS 2018. [2] Kumar, Manoj, George E. Dahl, Vijay Vasudevan, and Mohammad Norouzi. Parallel architecture and hyperparameter search via successive halving and classification. arXiv:1805.10255. [3] Yu, Yang, Hong Qian, and Yi-Qi Hu. Derivative-free optimization via classification. AAAI 2016. [4] Munos, Rémi. Optimistic optimization of a deterministic function without the knowledge of its smoothness. NeurIPS 2011. [5] Ziyu Wang, Babak Shakibi, Lin Jin, and Nando de Freitas. Bayesian multi-scale optimistic optimization. AISTATS 2014. [6] Kenji Kawaguchi, Leslie Pack Kaelbling, and Tomas Lozano-Perez. Bayesian optimization with exponential convergence. NeurIPS 2015. [7] Loshchilov, Ilya, Marc Schoenauer, and Michèle Sebag. A mono surrogate for multiobjective optimization. In Proceedings of the 12th annual conference on Genetic and evolutionary computation, 2010. [8] Seah, Chun-Wei, Yew-Soon Ong, Ivor W. Tsang, and Siwei Jiang. Pareto rank learning in multi-objective evolutionary algorithms. In 2012 IEEE Congress on Evolutionary Computation, 2012. [9] Pan, Linqiang, Cheng He, Ye Tian, Handing Wang, Xingyi Zhang, and Yaochu Jin. A classification-based surrogate-assisted evolutionary algorithm for expensive many-objective optimization. IEEE Transactions on Evolutionary Computation 2018. [10] Daulton, Samuel, David Eriksson, Maximilian Balandat, and Eytan Bakshy. Multi-Objective Bayesian Optimization over High-Dimensional Search Spaces. arXiv:2109.10964, 2021. [11] Hansen, Nikolaus, and Andreas Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation 2001. [12] Ishibuchi, Hisao, Yu Setoguchi, Hiroyuki Masuda, and Yusuke Nojima. Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes. IEEE Transactions on Evolutionary Computation 21, no. 2 (2016): 169-190.
4. Time Complexity: What is the time complexity of the proposed algorithm? In each step of LaMOO, it has to repeatedly calculate the hypervolume of different regions for promising region selection. However, the computation of hypervolume could be time-consuming, especially for problems with many objectives (e.g., >3). Would it make LaMOO impractical for those problems?
NIPS_2018_219
NIPS_2018
Weakness: 1. I found the paper hard to follow. Unfamiliar with local differential privacy, I found it hard to comprehend. The definition is in Section 2. I would urge the authors to present it in Section 1 2. The accuracy estimates provided in the paper are probabilistic. Without proper experiments it is impossible to judge the tradeoff between privacy and accuracy. This paper does not provide any expt results 3. Since this is an iterative system, how scalable is the method? This is very important to understand this, since the authors guarantee diff privacy after each epoch. There is a cost to pay for this in terms of the "delay" 4. From the simple problem of average of bits, how can we do go more complex data at each user? 5. No conclusion is provided Updated after Author response: I am still not happy that the authors did not do any expts. While theoretical results only provide a bound, the usefulness can only be found by thorough evaluation. I would also urge the authors to add a conclusion section since the takeaways become more informative after reading the whole paper.
5. No conclusion is provided Updated after Author response: I am still not happy that the authors did not do any expts. While theoretical results only provide a bound, the usefulness can only be found by thorough evaluation. I would also urge the authors to add a conclusion section since the takeaways become more informative after reading the whole paper.
NIPS_2020_1476
NIPS_2020
- As mentioned before, the dataset used in the experiments are all very small. It would be more convincing to see some result on medium or even large dataset such as ImageNet. But this is just a minor issue and it will not affect the overall quality of the paper. - Which model did you used in section 5 for image recognition task? To some extend it show the capability of Augerino on this task. However, on image recognition the network architecture strongly affect the result. It is interested to see what kind of chemical reaction will take place between Augrino and difference DNN architectures. --------------- after rebuttal ----------------- - Regarding the authors' response and all the other review comments I am agree with R4, that in this paper there is still some important issues needed to be re-worked before publication. I thus decided to lower my rating. I would like to encourage the authors to re-submit after revision.
- As mentioned before, the dataset used in the experiments are all very small. It would be more convincing to see some result on medium or even large dataset such as ImageNet. But this is just a minor issue and it will not affect the overall quality of the paper.
NIPS_2018_109
NIPS_2018
that limit the contribution. In particular: 1) the method does not seem to improve on the robustness and sensitivity as claimed in the motivation of using evolutionary methods in the first place. In fig 3 the new method is noisier in 2 domains and equally noisy as the competitors in the rest. 2) The paper claims SOTA in these domains compared to literature results. The baseline results reported in the paper under review in HalfCheetah, Swimmer and Hopper are worse the state of the art reported in the literature [1,2]. Either because the methods used achieved better results in [1,2] or because the SOTA in the domain was TRPO which was not reported in the paper under review. SOTA is a big claim; support it carefully. 3) There is significant over-claiming throughout the paper. E.g line 275 "best of both approaches", line 87 "maximal information extraction" 4) It is not clear why async methods like A3C were not discussed or compared against. This is critical given the SOTA claim 5) The paper did not really attempt to tease apart what was going on in the system. When evaluating how often was DDPG agent chosen for evaluation trials or did you prohibit this? What does the evaluation curve look like for the DDPG agent? That is do everything you have done, but evaluate the DDPG agent as the candidate from the population. Or allowing the population to produce the data for the DDPG and training it totally off-policy to see how well the DDPG learnings (breaking the bottom right link in fig 1). Does adding more RL agents help training? The paper is relatively clear and the is certainly original. However my concerns above highlight potential issues with quality and significance of the work. [1] https://arxiv.org/abs/1709.06560 [2] https://arxiv.org/pdf/1708.04133.pdf ++++++++++++ Ways to improve the paper that did not impact the scoring above: - you have listed some of the limitations of evolutionary methods, but I think there are much deeper things to say regarding leveraging state, reactiveness, and learning during an episode. Being honest and direct would work well for this work - the title is way to generic and vague - be precise when being critical. What does "brittle convergence properties mean" - I would say DeepRL methods are widely adopted. Consider the landscape 10 years ago. - claim V-trace is too expensive. I have no idea why - its important to note that evolutionary methods can be competitive but not better than RL methods - discussion starting on line 70 is unclear and seems not well supported by data. Say something more plain and provide data to back it up - definition of policy suggests deterministic actions - not sure what state space s = 11 means? typo - section at line 195 seems repetitive. omit
- you have listed some of the limitations of evolutionary methods, but I think there are much deeper things to say regarding leveraging state, reactiveness, and learning during an episode. Being honest and direct would work well for this work - the title is way to generic and vague - be precise when being critical. What does "brittle convergence properties mean" - I would say DeepRL methods are widely adopted. Consider the landscape 10 years ago.
ICLR_2021_2674
ICLR_2021
Though the training procedure is novel, a part of the algorithm is not well-justified to follow the physics and optics nature of this problem. A few key challenges in depth from defocus are missing, and the results lack a full analysis. See details below: - the authors leverage multiple datasets, including building their own to train the model. However, different dataset is captured by different cameras, and thus the focusing distance, aperture settings, and native image resolution all affect the circle of confusion, how are those ambiguities taken into consideration during training? - related to the point above, the paper doesn't describe the pre-processing stage, neither did it mention how the image is passed into the network. Is the native resolution preserved, or is it downsampled? - According to Held et al "Using Blur to Affect Perceived Distance and Size", disparity and defocus can be approximated by a scalar that is related to the aperture and the focus plane distance. In the focal stack synthesis stage, how is the estimated depth map converted to a defocus map to synthesize the blur? - the paper doesn't describe how is the focal stack synthesized, what's the forward model of using a defocus map and an image to synthesize defocused image? how do you handle the edges where depth discontinuities happen? - in 3.4, what does “Make the original in-focus region to be more clear” mean? in-focus is defined to be sharpest region an optical system can resolve, how can it be more clear? - the paper doesn't address handling textureless regions, which is a challenging scenario in depth from defocus. Related to this point, how are the ArUco markers placed? is it random? - fig 8 shows images with different focusing distance, but it only shows 1m and 5m, which both exist in the training data. How about focusing distance other than those appeared in training? does it generalize well? - what is the limit of the amount of blur presented in the input that the proposed models would fail? Are there any efforts in testing on smartphone images where the defocus is *just* noticeable by human eyes? how do the model performances differ for different defocus levels? Minor suggestions - figure text should be rasterized, and figures should maintain its aspect ratio. - figure 3 is confusing as if the two nets are drawn to be independent from each other -- CNN layers are represented differently, one has output labeled while the other doesn't. It's not labeled as the notation written in the text so it's hard to reference the figure from the text, or vice versa. - the results shown in the paper are low-resolution, it'd be helpful to have zoomed in regions of the rendered focal stack or all-in-focus images to inspect the quality. - the sensor plane notation 's' introduced in 3.1 should be consistent in format with the other notations. - calling 'hyper-spectral' is confusing. Hyperspectral imaging is defined as the imaging technique that obtains the spectrum for each pixel in the image of a scene.
- the paper doesn't describe how is the focal stack synthesized, what's the forward model of using a defocus map and an image to synthesize defocused image? how do you handle the edges where depth discontinuities happen?
NIPS_2016_9
NIPS_2016
Weakness: The authors do not provide any theoretical understanding of the algorithm. The paper seems to be well written. The proposed algorithm seems to work very all on the experimental setup, using both synthetic and real-world data. The contributions of the papers are enough to be considered for a poster presentation. The following concerns if addressed properly could raise to the level of oral presentation: 1. The paper does not provide an analysis on what type of data the algorithm work best and on what type of data the algorithm may not work well. 2. The first claimed contribution of the paper is that unlike other existing algorithms, the proposed algorithm does not take as many points or does not need apriori knowledge about dimensions of subspaces. It would have been better if there were some empirical justification about this. 3. It would be good to show some empirical evidence that the proposed algorithm works better for Column Subset Selection problem too, as claimed in the third contribution of the paper.
2. The first claimed contribution of the paper is that unlike other existing algorithms, the proposed algorithm does not take as many points or does not need apriori knowledge about dimensions of subspaces. It would have been better if there were some empirical justification about this.
NIPS_2021_537
NIPS_2021
Weakness: The main weakness of the approach is the lack of novelty. 1. The key contribution of the paper is to propose a framework which gradually fits the high-performing sub-space in the NAS search space using a set of weak predictors rather than fitting the whole space using one strong predictor. However, this high-level idea, though not explicitly highlighted, has been adopted in almost all query-based NAS approaches where the promising architectures are predicted and selected at each iteration and used to update the predictor model for next iteration. As the authors acknowledged in Section 2.3, their approach is exactly a simplified version of BO which has been extensively used for NAS [1,2,3,4]. However, unlike BO, the predictor doesn’t output uncertainty and thus the authors use a heuristic to trade-off exploitation and exploration rather than using more principled acquisition functions. 2. If we look at the specific components of the approach, they are not novel as well. The weak predictor used are MLP, Regression Tree or Random Forest, all of which have been used for NAS performance prediction before [2,3,7]. The sampling strategy is similar to epsilon-greedy and exactly the same as that in BRP-NAS[5]. In fact the results of the proposed WeakNAS is almost the same as BRP-NAS as shown in Table 2 in Appendix C. 3. Given the strong empirical results of the proposed method, a potentially more novel and interesting contribution would be to find out through theorical analyses or extensive experiments the reasons why simple greedy selection approach outperforms more principled acquisition functions (if that’s true) on NAS and why deterministic MLP predictors, which is often overconfident when extrapolate, outperform more robust probabilistic predictors like GPs, deep ensemble or Bayesian neural networks. However, such rigorous analyses are missing in the paper. Detailed Comments: 1. The authors conduct some ablation studies in Section 3.2. However, a more important ablation would be to modify the proposed predictor model to get some uncertainty (by deep-ensemble or add a BLR final output layer) and then use BO acquisition functions (e.g. EI) to do the sampling. The proposed greedy sampling strategy works because the search space for NAS-Bench-201 and 101 are relatively small and as demonstrated in [6], local search even gives the SOTA performance on these benchmark search spaces. For a more realistic search space like NAS-Bench-301[7], the greedy sampling strategy which lacks a principled exploitation-exploration trade-off might not work well. 2. Following the above comment, I’ll suggest the authors to evaluate their methods on NAS-Bench-301 and compare with more recent BO methods like BANANAS[2] and NAS-BOWL[4] or predictor-based method like BRP-NAS [5] which is almost the same as the proposed approach. I’m aware that the authors have compared to BONAS and shows better performance. However, BONAS uses a different surrogate which might be worse than the options proposed in this paper. More importantly, BONAS use weight-sharing to evaluate architectures queried which may significantly underestimate the true architecture performance. This trades off its performance for time efficiency. 3. For results on open-domain search, the authors perform search based on a pre-trained super-net. Thus, the good final performance of WeakNAS on MobileNet space and NASNet space might be due to the use of a good/well-trained supernet; as shown in Table 6, OFA with evalutinary algorithm can give near top performance already. More importantly, if a super-net has been well-trained and is good, the cost of finding the good subnetwork from it is rather low as each query via weight-sharing is super cheap. Thus, the cost gain in query efficiency by WeakNAS on these open-domain experiments is rather insignificant. The query efficiency improvement is likely due to the use of a predictor to guide the subnetwork selection in contrast to the naïve model-free selection methods like evolutionary algorithm or random search. A more convincing result would be to perform the proposed method on DARTS space (I acknowledge that doing it on ImageNet would be too expensive) without using the supernet (i.e. evaluate the sampled architectures from scratch) and compare its performance with BANANAS[2] or NAS-BOWL[4]. 4. If the advantage of the proposed method is query-efficiency, I’d love to see Table 2, 3 (at least the BO baselines) in plots like Fig. 4 and 5, which help better visualise the faster convergence of the proposed method. 5. Some intuitions are provided in the paper on what I commented in Point 3 in Weakness above. However, more thorough experiments or theoretical justifications are needed to convince potential users to use the proposed heuristic (a simplified version of BO) rather than the original BO for NAS. 6. I might misunderstand something here but the results in Table 3 seem to contradicts with the results in Table 4. As in Table 4, WeakNAS takes 195 queries on average to find the best architecture on NAS-Bench-101 but in Table 3, WeakNAS cannot reach the best architecture after even 2000 queries. 7. The results in Table 2 which show linear-/exponential-decay sampling clearly underperforms uniform sampling confuse me a bit. If the predictor is accurate on the good subregion, as argued by the authors, increasing the sampling probability for top-performing predicted architectures should lead to better performance than uniform sampling, especially when the performance of architectures in the good subregion are rather close. 8. In Table 1, what does the number of predictors mean? To me, they are simply the number of search iterations. Do the authors reuse the weak predictors from previous iterations in later iterations like an ensemble? I understand that given the time constraint, the authors are unlikely to respond to my comments. Hope those comments can help the authors for future improvement of the paper. References: [1] Kandasamy, Kirthevasan, et al. "Neural architecture search with Bayesian optimisation and optimal transport." NeurIPS. 2018. [2] White, Colin, et al. "BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search." AAAI. 2021. [3] Shi, Han, et al. "Bridging the Gap between Sample-based and One-shot Neural Architecture Search with BONAS." NeurIPS. 2020. [4] Ru, Binxin, et al. "Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels." ICLR. 2020. [5] Dudziak, Lukasz, et al. "BRP-NAS: Prediction-based NAS using GCNs." NeurIPS. 2020. [6] White, Colin, et al. "Local search is state of the art for nas benchmarks." arXiv. 2020. [7] Siems, Julien, et al. "NAS-Bench-301 and the case for surrogate benchmarks for neural architecture search." arXiv. 2020. The limitation and social impacts are briefly discussed in the conclusion.
2. If we look at the specific components of the approach, they are not novel as well. The weak predictor used are MLP, Regression Tree or Random Forest, all of which have been used for NAS performance prediction before [2,3,7]. The sampling strategy is similar to epsilon-greedy and exactly the same as that in BRP-NAS[5]. In fact the results of the proposed WeakNAS is almost the same as BRP-NAS as shown in Table 2 in Appendix C.
NIPS_2021_2224
NIPS_2021
. 1. The proposed S1DB-ED algorithm is too similar to RMED (Komiyama et al. 2015), so I think the novelty of this part is limited. The paper needs to give a sufficient discussion on the comparison with RMED. 2. The comparison baselines in experiments are not sufficient. The paper only compares the proposed two algorithms, so readers cannot evaluate the empirical performance of the proposed algorithms. While I understand that this is a new problem and there are no other existing algorithms for this problem, the paper can still compare to some ablation variants of proposed algorithms to demonstrate the effectiveness of key algorithmic components, or reduce the setting to conventional dueling bandits and compare with existing dueling bandit algorithms. After Rebuttal I read the rebuttal of the authors. Now I agree that the analysis for the S1DB-ED algorithm is non-trivial and the authors correct the errors in prior work [21]. My concerns are well addressed. So I will keep my score.
.1. The proposed S1DB-ED algorithm is too similar to RMED (Komiyama et al. 2015), so I think the novelty of this part is limited. The paper needs to give a sufficient discussion on the comparison with RMED.
PCm1oT8pZI
ICLR_2024
1. The authors do not give a comprehensive discussion of previous work on this topic. 2. The experimental justification of this work is not sufficient, only compared to the basic backdoor-based strategy.
1. The authors do not give a comprehensive discussion of previous work on this topic.
ARR_2022_169_review
ARR_2022
1. The paper claims that it exploits unlabelled target language data. However, in line 363, it seems that the paper actually uses event-presence labels $e_{i}$ for each target language sample. First, $e_{i}$ is probably extracted directly from labels $y_{i}$; it is when $y_{i}$ says that some word is an event trigger that one can know that $e_{i}=1$. So, for target language data, their labels are actually used in an indirect way. Thus the method is not totally using pure "unlabelled" target language data as the paper claims. Second, I think $e_{i}$ provides super crucial information which might be responsible for most of the gain derived. To make fair comparisons with the baselines, I think baseline methods BERT-CRF in section 3.2 and the BERT-CRF+MLM in 3.4 should also see $e_{i}$ labels. 2. Also concerning $e_{i}$ in weakness point 1 above, it is not known how $e_{i}$ and $e_{i}$'s distributions look like at all. I could only guess $e_{i}$ is 0,1 binary variables? Since all SRC and TRG data comes from Cross-Lingual Event Detection datasets, maybe most samples do have an event trigger and thus most $e_{i}$s equal 1. 3. It is confusing in line 339: s$\sim$p(s) and t$\sim$p(t). Do p(s) and p(t) here the ones calculated and updated in equation (6-7) in lines 369-370? Or maybe it is fixed since each sample already has a ground truth $e_{i}$. If it is the former case, I think it might be a little weird to predict the p(s) and p(t) which the paper uses to draw samples, because p(s) and p(t) are already given since $e_{i}$s are known for all samples? 4. The authors did not justify why Optimal Transport (OT) is used and did not elaborate on what are OT's advantages. One simple substitute for OT is average euclidean distance or average cosine similarity, which can be used to replace the paper's equation (8). There are more substitutes to OT, such as the KL divergence or the Jensen-Shannon divergence (which are commonly used to make comparisons with OT). It is worth comparing OT with say, Euclidean distance, KL divergence as a side experiment. All these simple substitutes are probably super-efficient and quicker to compute than OT. 5. It is not known if the OT sample selection process in 2.4.3 only runs once or runs iteratively as EP module is updated during the training steps. Are optimizing the loss of equation (10), i.e. the training steps, and solving OT in equation (3) conducted by turns iteratively? It will be much easier for readers to know the whole process if more details and a flow chart can be added. Furthermore, what is the runtime for solving the entropic regularized discrete OT problem, and the runtime for OT sample selection? 6. It is claimed in lines 128-132 that "it would be beneficial for the LD to be trained with examples containing events". The statement in lines 137-148 also focuses only on LD. Why only LD benefits from seeing examples containing events? Do the text encoders also benefit from seeing these examples? A clue that the encoder might benefit from unlabelled data is in 3.4's result where simply MLM fine-tuning can derive considerable gains. 7. In section 3.4, the result shows that simple MLM fine-tuning on unlabelled target language data derives considerable gains against BERT-CRF baseline. I was curious if the authors could do BERT-CRF + MLM + EP like in equation (10), can the performance be better than ALA? If true, it might show that a simple MLM is better than adversarial training. 1. The writing is not fluent enough. Some typos and awkward/redundant/unnatural sentences such as lines 019, 041-043. 2. Using Optimal Transport (OT), or more specifically leveraging the Wasserstein Distance, in GAN is first seen in the Wasserstein GAN paper, i.e. WGAN (Arjovsky et al. ICML 2017). It might be beneficial to discuss WGAN a bit or even add WGAN as a baseline method. 3. The paper should elaborate on OT in both the introduction and the methodology parts and should provide more details and justifications for OT. 4. In equation (4), L2 distance is used. In OT, earth mover's distance is more common. What is the benefit of L2 distance? 5. I hope to see the authors' response in resubmission (if rejected) or clarifications in the camera-ready (if accepted) to remove my concern.
5. It is not known if the OT sample selection process in 2.4.3 only runs once or runs iteratively as EP module is updated during the training steps. Are optimizing the loss of equation (10), i.e. the training steps, and solving OT in equation (3) conducted by turns iteratively? It will be much easier for readers to know the whole process if more details and a flow chart can be added. Furthermore, what is the runtime for solving the entropic regularized discrete OT problem, and the runtime for OT sample selection?
NIPS_2021_2418
NIPS_2021
- The class of problems is not very well motivated. The CIFAR example is contrived and built for demonstration purposes. It is not clear what application would warrant sequentially (or in batches) and jointly selecting tasks and parameters to simultaneously optimize multiple objective functions. Although one could achieve lower regret in terms of total task-function evaluations by selecting the specific task(s) to evaluate rather than evaluating all tasks simultaneously, the regret may not be better with respect to timesteps. For example, in the assemble-to-order, even if no parameters are evaluated for task function (warehouse s) at timestep t, that warehouse is going to use some (default) set of parameters at timestep t (assuming it is in operation---if this is all on a simulator then the importance of choosing s seems even less well motivated). There are contextual BO methods (e.g. Feng et al 2020) that address the case of simultaneously tuning parameters for multiple different contexts (tasks), where all tasks are evaluated at every timestep. Compelling motivating examples would help drive home the significance of this paper. - The authors take time to discuss how KG handles the continuous task setting, but there are no experiments with continuous tasks - It’s great that entropy methods for conditional optimization are derived in Section 7 in the appendix, but why are these not included in the experiments? How does the empirical performance of these methods compare to ConBO? - The empirical performance is not that strong. EI is extremely competitive and better in low-budget regimes on ambulance and ATO - The performance evaluation procedure is bizarre: “We measure convergence of each benchmark by sampling a set of test tasks S_test ∼ P[s] ∝ W(s) which are never used during optimization”. Why are the methods evaluated on test tasks not used during the optimization since all benchmark problems have discrete (and relatively small) sets of tasks? Why not evaluate performance on the expected objective (i.e. true, weighted) across tasks? - The asymptotic convergence result for Hybrid KG is not terribly compelling - It is really buried in the appendix that approximate gradients are used to optimize KG using Adam. I would feature this more prominently. - For the global optimization study on hybrid KG, it would be interesting to see performance compared to other recent kg work (e.g. one-shot KG, since that estimator formulation can be optimized with exact gradients) Writing: - L120: this is a run-on sentence - Figure 2: left title “poster mean” -> “posterior mean” - Figure 4: mislabeled plots. The title says validation error, but many subplots appear to show validation accuracy. Also, “hyperaparameters” -> hyperparameters - L286: “best validation error (max y)” is contradictory - L293: “We apply this trick to all algorithms in this experiment”: what is “this experiment”? - The appendix is not using NeurIPS 2021 style files - I recommend giving the appendix a proofread: Some things that jump out P6: “poster mean”, “peicewise-linear” P9: “sugggest” Limitations and societal impacts are discussed, but the potential negative societal impacts could be expounded upon.
- The authors take time to discuss how KG handles the continuous task setting, but there are no experiments with continuous tasks - It’s great that entropy methods for conditional optimization are derived in Section 7 in the appendix, but why are these not included in the experiments? How does the empirical performance of these methods compare to ConBO?
hkjcdmz8Ro
ICLR_2024
1. The technique contribution is week. The proposed method utilizes the LLM to refine the prompt. Thus, the performance of the proposed method heavily relies on the designed system prompt and LLMs. Moreover, the proposed method is based on heuristics, i.e., there is no insight for the proposed approach. But I understand those two points could be very challenging for LLM research. 2. The evaluation is not systematic. For instance, only 50 questions are used in the evaluation. Thus, it is unclear whether the proposed approach is generalizable. More importantly, is the judge model the same for the proposed algorithm and evaluation? If this is the case, it is hard to see whether the reported results are reliable as LLMs could be inaccurate in their predictions. It would be better if other metrics could be used for cross-validation, e.g., manually check and the word list used by Zou et al. 2023. The proposed method is only compared with GCG. There are also many other baselines, e.g., handcrafted methods (https://www.jailbreakchat.com/). 3. In GCG, authors showed that their approach could be transferred to other LLMs. Thus, GCG could craft adversarial prompts and transfer them to other LLMs. It would be good if such a comparison could be included. A minor point: The jailbreaking percentage is low for certain LLMs.
3. In GCG, authors showed that their approach could be transferred to other LLMs. Thus, GCG could craft adversarial prompts and transfer them to other LLMs. It would be good if such a comparison could be included. A minor point: The jailbreaking percentage is low for certain LLMs.
NIPS_2021_835
NIPS_2021
The authors addressed the limitations and potential negative societal impact of their work. However, there are some concerns as follows: 1.The main concern is the innovation of this paper. Firstly, laplacian score is proposed by Ref.13 for feature selection as an unsupervised measure. Secondly, i think that the main contribution of this paper is stochastic gates, but in Ref.36, the technology of stochastic gates is already used in supervised feature selection. Finally, authors focus on the traditional unsupervised feature selection problem. Thus i think that the core contribution of this paper is that authors extend the supervised problem in Ref.36 to the unsupervised problem without theoretical guarantees. Even authors introduce the importance of unsupervised feature selection from a diffusion perspective, but i don't think this is the core contribution of this article. For this question, if the authors can persuade me , I will change my score. 2.Authors introduce the importance of unsupervised feature selection from a diffusion perspective and i think this is a very novel thing for feature selection, but i can't understand what is the difference between similarity and exit times in nature. I hope the author can give me a more detailed explanation to understand the difference. 3.Authors sample a stochastic gate (STG) vector in algorithm 1 and thus i think that the proposed method should have randomness. But in the main experiment of this paper, i don't see this randomness analyzed by authors. 4.It would be better if the authors add some future work.
2.Authors introduce the importance of unsupervised feature selection from a diffusion perspective and i think this is a very novel thing for feature selection, but i can't understand what is the difference between similarity and exit times in nature. I hope the author can give me a more detailed explanation to understand the difference.
NIPS_2022_1250
NIPS_2022
Lacking of discussions or motivations for the importance of the proposed idea Empirical results: Can be on toy tasks The paper pursues an interesting research direction, which tries to unify existing POMDP formalisms. The approach looks very promising. The proposed design of the critic is very interesting. It would become very interesting if the paper can provides some basic empirical results on toy tasks to show all important claim in practice. - As the unified framework can now obtain provably efficient learning for most POMDP formalisms. Is there any limitations of its, e.g. can it do the same for any general POMDP formulations (continuous or infinite spaces)? - How can one understand agnostic learning? In Algorithm, is z just defined as historical observations? Or is it in the form of belief?
- As the unified framework can now obtain provably efficient learning for most POMDP formalisms. Is there any limitations of its, e.g. can it do the same for any general POMDP formulations (continuous or infinite spaces)?
ICLR_2023_4411
ICLR_2023
Weakness • The reviewer thinks the authors need to elaborate how the output labels are defined for density assessment. In section 3. Datasets, it seems the authors gives confusing definitions of density and BIRADS findings like “we categorized BI-RADS density scores into two separate categories: BI-RADS 2 and 3 as benign and BI-RADS 5 and 6 as malignant”. There is no description about what “Density A”, “Density B”, “Density C”, and “Density D” mean. Also, as the reviewer knows, benign or malignant classification can be confirmed with biopsy results not BIRADS scores. Even though the reviewer is not familiar with the two public datasets, the reviewer thinks the datasets should have biopsy information to annotate lesions whether malignant or benign. • As a preprocessing step, the authors segmented and removed the region of the pectoral muscle from MLO views. However, the authors did not explain how the segmentation model was developed (they just mentioned employed the prior work) and the review has a concern that important features can be removed from this preprocessing step. It might be useful to compare model performance using MLO views with and without this preprocessing step to confirm the benefit of this pectoral muscle removal. • How did you calculate precision/recall/F1-score for 4-class classification of breast density? Also, for breast cancer detection, researchers usually report AUC with sensitivity and specificity at different operating points to compare model performance. It might be more informative to provide AUC results for comparisons. • The reviewer thinks comparison of their proposed approach with the single-view result is unfair. This is because information that multi views contain is 4x larger than the one that the single view has. So, to demonstrate the benefit of using the proposed fusion strategy, they need to report performance of multi-view results with simple fusion approach like the average/maximum of 4 view scores, or max over mean values of each breast. • Are the results reported in this study based on patient/study level? How did you calculate performance when using single views? Did you assume that each study has only one view? • What fusion strategy was used for results in Table 2? Are these results based on image level?
• How did you calculate precision/recall/F1-score for 4-class classification of breast density? Also, for breast cancer detection, researchers usually report AUC with sensitivity and specificity at different operating points to compare model performance. It might be more informative to provide AUC results for comparisons.
bpArUWbkUF
EMNLP_2023
- There are some minor issues with the papers, but still, no strong reasons to reject them: - I found that the creation of the dataset is optional. The Kialo dataset, well-studied in the community, provides exactly what the authors need, pairs of short claims and their counters. It is even cleaner than the dataset the authors created since no automatic processes exist to construct it. Still, what has been created in this paper can be extra data to learn from. - The related work, especially regarding counter-argument generation, was shortly laid out with little elaboration about how previous works addressed that task and the implications. - In the abstract, the authors claim they trained Arg-Judge with human preference. However, looking at the details of the data that the model is trained on, it turns out that the data is automatically created and does not precisely reflect human preferences. - The procedure of creating the seed instructions, expanding them, and mapping them to inputs needed to be clarified. Providing examples here would be very helpful.
- I found that the creation of the dataset is optional. The Kialo dataset, well-studied in the community, provides exactly what the authors need, pairs of short claims and their counters. It is even cleaner than the dataset the authors created since no automatic processes exist to construct it. Still, what has been created in this paper can be extra data to learn from.
NIPS_2021_2050
NIPS_2021
1. Transformer has been adopted for lots of NLP and vision tasks, and it is no longer novel in this field. Although the authors made a modification on the transformer, i.e. cross-layer, it does not bring much insight in aspect of machine learning. Besides, in ablation study (table4 and 5), the self-cross attention brings limited improvement (<1%). I don’t think this should be considered as significant improvement. It seems that the main improvements over other methods come from using a naïve transformer instead of adding the proposed modification. 2. This work only focuses on a niche task, which is more suitable for CV conference like CVPR rather than machine learning conference. The audience should be more interested in techniques that can work for general tasks, like general image retrieval. 3. The proposed method uses AdamW with cosine lr for training, while comparing methods only use adam with fixed lr. Directly comparing with their numbers in paper is unfair. It would be better to reproduce their results using the same setting, since most of the recent methods have their code released.
1. Transformer has been adopted for lots of NLP and vision tasks, and it is no longer novel in this field. Although the authors made a modification on the transformer, i.e. cross-layer, it does not bring much insight in aspect of machine learning. Besides, in ablation study (table4 and 5), the self-cross attention brings limited improvement (<1%). I don’t think this should be considered as significant improvement. It seems that the main improvements over other methods come from using a naïve transformer instead of adding the proposed modification.
ICLR_2021_1849
ICLR_2021
I see in this paper are: - Although there is a clear and formal explanation of why it is not possible to discriminate among classes from different task when there is no access to data from those previous classes, I am not fully convinced that the set of parameters kept from previous classes, and used in regularization-based approaches, do not represent to some extent this data. In particular, there is no clear argument for the claim on page 5: “However, by hypothesis, \omega_{t-1} does not model the data distribution from C_{t-1} and therefore it does not model data distribution from C_{t-1} classes.”. I would like to see some discussion regarding how fairly a set of parameters \theta_{t-1} would represent the S’ set. - In terms of the experiments, I consider the number of tasks quite limited. To be convinced I would like to see several tasks (at least 10) and sequential results in terms of tasks learned rather than epochs. Questions for authors: Please address my comments on the weaknesses above.
- In terms of the experiments, I consider the number of tasks quite limited. To be convinced I would like to see several tasks (at least 10) and sequential results in terms of tasks learned rather than epochs. Questions for authors: Please address my comments on the weaknesses above.
ICLR_2022_212
ICLR_2022
Weakness: 1. The introduction of the motivation (the concept of in-context bias) is not easy to understand at the very beginning. The paper said: “the pretrained NLM can model much stronger dependencies between text segments that appeared in the same training example, than it can between text segments that appeared in different training examples.” Acutally it seems quite natural for me and I did not realize it is a problem until I saw more explanations in section 1.1. 2. The theory is a bit complicated and not easy to follow. 3. The experiments are limited. The authors only conduct the evaluation on sentence similarity tasks and open domain QA tasks. However, there are many other tasks that involve sentence pairs. For example, sentence inference tasks such as MNLI and RTE are common tasks in NLP field. The authors should conduct experiments on more types of sentence pair tasks.
3. The experiments are limited. The authors only conduct the evaluation on sentence similarity tasks and open domain QA tasks. However, there are many other tasks that involve sentence pairs. For example, sentence inference tasks such as MNLI and RTE are common tasks in NLP field. The authors should conduct experiments on more types of sentence pair tasks.
o3V7OuPxu4
ICLR_2025
Overall, the paper lacks clarity and depth in describing both the technical implementation and practical contributions. ### **Major comments** 1. Unclear contribution: The paper does not effectively justify why this benchmark must exist as a standalone contribution rather than an addition to existing Starcraft II resources. The contribution seems limited to a collection of scripts and metrics, which could likely be integrated into the existing environment without creating a separate benchmark. 2. Lack of implementation details: Key technical aspects of the implementation are insufficiently described, making it hard to understand the benchmark's novelty and how it's technically realized. Several things are not clear, such as: - Integration: How are LLM agents integrated with StarCraft II? How can users use the benchmark? Does the benchmark use a custom API or an interface for this? - Decision Tracking: How is decision-making tracked and analyzed? While Table 3 provides a decision trajectory, details of how this is analyzed and used are missing. - Computational Requirements: What hardware/software is necessary to run this benchmark effectively? This information is critical for usability but is absent. - Opponents: Are the LLMs evaluated with built-in agents or newly introduced opponents? The fact that agents are evaluated against built-in agents in Starcraft II is mentioned as a limitation, but it is unclear whether the authors change this in their benchmark. 3. Incomplete metric information: The metrics lack context. For instance, while Appendix A.1 outlines the metrics, there are no defined ranges, leaving the reader unsure of how to interpret scores. For example, how should a Real-Time Decision score of 21.12 versus 37.51 in Table 4 be interpreted? Similarly, terms such as “effective” actions in EPM or “collected vespene” are not unexplained, reducing the metrics’ interpretability (how do we know that these are the right metrics to assess decision-making and planning?). 4. Missing benchmark discussion and limitations: A discussion about future development and limitations of the benchmark is missing, which limits the reader's understanding of the benchmark's intended scope and future extensions. 5. Figure 2 indicates a large variance. Why are there no error bars in the tables? 6. It's important to have the prompt included in the appendix or supplement. Was it possibly in a supplement that I cannot access? ### **Minor comments** (These did not affect my score) - Abstract: Lines 016-019 are a bit difficult to understand; consider rephrasing - Figure 2: It’s unclear what this Figure is meant to convey, and the Figure lacks labeled y-axes. - In Section 4.3, line 367 states "Definitions and methods for these metrics will be further detailed in the figure 4.3." This seems to refer to a table, possibly Table 3, rather than a figure. - In Table 3, "OBSERVERtgreater" should probably be "OBSERVER." - Lines 323 + 350 state that screenshots illustrating decision traces will be provided in the appendix, but these are not included - I don't understand what is meant when the authors state that civilization and the other games are not "strategic and tactical" in Table 1. Additionally, Werewolf is clearly an imperfect information game. The authors should reconsider this table because I believe many of the entries are inaccurate. - Why is the score in Table 4 unnormalized? It's an incomprehensible number as it stands.
6. It's important to have the prompt included in the appendix or supplement. Was it possibly in a supplement that I cannot access? ### **Minor comments** (These did not affect my score) - Abstract: Lines 016-019 are a bit difficult to understand; consider rephrasing - Figure 2: It’s unclear what this Figure is meant to convey, and the Figure lacks labeled y-axes.
NIPS_2020_153
NIPS_2020
* Both this paper and the Nasr et al paper use number detectors (number selective units) as an indicator of number sense. However, the presence of number selective units is not a necessary condition for number sense. There are potentially other distributed coding schemes (other than tuning curves) that could be employed. It seems like the question that you really want to ask is whether the representation in the last convolutional layer is capable of distinguishing images of varying numerosity. In which case, why not just train a linear probe? Number sense is a cognitive ability, not a property of individual neurons. We don't really care what proportion of units are number selective as long as the network is able to perceive numerosity (which might not require very many units). A larger proportion of number selective units doesn't necessarily imply a better number sense. As such, I question the reliance on the analysis of individual units and would rather see population decoding results. * The motivation for analyzing only the last convolutional layer is not clear. Why would numerosity not appear in earlier layers? * The motivation for using classification rather than regression when training explicitly for numerosity is not well justified. The justification, "numerosity is a raw perception rather than resulting from arithmetic", is not clear. Humans clearly perceive numbers on a scale not as unrelated categories. That the subjective experience of numerosity does not involve arithmetic does not constrain the neural mechanisms that could underly that perception. * No effect sizes are reported for number selectivity. Since you did ANOVAs there should be an eta squared for the main effect of numerosity. How number selective are these units?
* The motivation for analyzing only the last convolutional layer is not clear. Why would numerosity not appear in earlier layers?
NIPS_2018_761
NIPS_2018
Weakness] * How to set the parameter S remains a problem. * Algorithm SMILE is interesting but their theoretical results on its performance is not easy to interpret. * No performance comparison with existing algorithms [Recommendation] I recommend this paper to be evaluated as "a good submission; an accept". Their problem formalization is clear, and SMILE algorithm and its theoretical results are interesting. All their analyses are asymptotically evaluated, so I worry about how large the constant factors are. It would make this manuscript more valuable if how good their algorithms (OOMM & SMILE) would be shown theoretically and empirically compared to other existing algorithms. [Detailed Comments] p.7 Th 3 & Cor 1: C^G and C^B look random variables. If it is true, then they should not be used as parameters of T's order. Maybe the authors want to use their upper bounds shown above instead of them. p.7 Sec. 5 : Write the values of parameter S. [Comments to Authors’ Feedback] Setting parameter S: Asymptotic relation shown in Th 3 is a relation between two functions. It is impossible to estimate S from estimated M for a specific n using such a asymptotic functional relation.
* How to set the parameter S remains a problem.
NIPS_2016_265
NIPS_2016
1. For the captioning experiment, the paper compares to related work only on some not official test set or dev set, however the final results should be compared on the official COOC leader board on the blind test set: https://competitions.codalab.org/competitions/3221#results e.g. [5,17] have won this challenge and have been evaluated on the blind challenge set. Also, several other approaches have been proposed since then and significantly improved (see leaderboard, the paper should at least compare to the once where an corresponding publication is available). 2. A human evaluation for caption generation would be more convincing as the automatic evaluation metrics can be misleading. 3. It is not clear from Section 4.2 how the supervision is injected for the source code caption experiment. While it is over interesting work, for acceptance at least points 1 and 3 of the weaknesses have to be addressed. ==== post author response === The author promised to include the results from 1. in the final For 3. it would be good to state it explicitly in Section section 4.2. I encourage the authors to include the additional results they provided in the rebuttal, e.g. T_r in the final version, as it provides more insight in the approach. Mine and, as far as I can see, the other reviewers concerns have been largely addressed, I thus recommend to accept the paper.
2. A human evaluation for caption generation would be more convincing as the automatic evaluation metrics can be misleading.
NIPS_2018_630
NIPS_2018
- While there is not much related work, I am wondering whether more experimental comparisons would be appropriate, e.g. with min-max networks, or Dugas et al., at least on some dataset where such models can express the desired constraints. - The technical delta from monotonic models (existing) to monotonic and convex/concave seems rather small, but sufficient and valuable, in my opinion. - The explanation of lattice models (S4) is fairly opaque for readers unfamiliar with such models. - The SCNN architecture is pretty much given as-is and is pretty terse; I would appreciate a bit more explanation, comparison to ICNN, and maybe a figure. It is not obvious for me to see that it leads to a convex and monotonic model, so it would be great if the paper would guide the reader a bit more there. Questions: - Lattice models expect the input to be scaled in [0, 1]. If this is done at training time using the min/max from the training set, then some test set samples might be clipped, right? Are the constraints affected in such situations? Does convexity hold? - I know the author's motivation (unlike ICNN) is not to learn easy-to-minimize functions; but would convex lattice models be easy to minimize? - Why is this paper categorized under Fairness/Accountability/Transparency, am I missing something? - The SCNN getting "lucky" on domain pricing is suspicious given your hyperparameter tuning. Are the chosen hyperparameters ever at the end of the searched range? The distance to the next best model is suspiciously large there. Presentation suggestions: - The introduction claims that "these shape constraints do not require tuning a free parameter". While technically true, the *choice* of employing a convex or concave constraint, and an increasing/decreasing constraint, can be seen as a hyperparameter that needs to be chosen or tuned. - "We have found it easier to be confident about applying ceterus paribus convexity;" -- the word "confident" threw me off a little here, as I was not sure if this is about model confidence or human interpretability. I suspect the latter, but some slight rephrasing would be great. - Unless I missed something, unconstrained neural nets are still often the best model on half of the tasks. After thinking about it, this is not surprising. It would be nice to guide the readers toward acknowledging this. - Notation: the x[d] notation is used in eqn 1 before being defined on line 133. - line 176: "corresponds" should be "corresponding" (or alternatively, replace "GAMs, with the" -> "GAMs; the") - line 216: "was not separately run" -> "it was not separately run" - line 217: "a human can summarize the machine learned as": not sure what this means, possibly "a human can summarize what the machine (has) learned as"? or "a human can summarize the machine-learned model as"? Consider rephrasing. - line 274, 279: write out "standard deviation" instead of "std dev" - line 281: write out "diminishing returns" - "Result Scoring" strikes me as a bit too vague for a section heading, it could be perceived to be about your experiment result. Is there a more specific name for this task, maybe "query relevance scoring" or something? === I have read your feedback. Thank you for addressing my observations; moving appendix D to the main seems like a good idea. I am not changing my score.
- The introduction claims that "these shape constraints do not require tuning a free parameter". While technically true, the *choice* of employing a convex or concave constraint, and an increasing/decreasing constraint, can be seen as a hyperparameter that needs to be chosen or tuned.
wRbSdbGyfj
ICLR_2025
1. **Triviality of Convergence Proof**: The theoretical proof for convergence appears trivial. Although the paper claims that $Z$ is non-i.i.d., Assumption 4.1 indicates that $X$ is i.i.d., leading to a clear covariance matrix for $Z$ as $A^\top A / np$. Following Modification 1 in Appendix C, previous theorems can be trivially adapted with straightforward modifications. Thus, the convergence proof lacks substantial novelty and rigor. 2. **Limited Parameterization**: According to Equation 2.2, it seems that only the final logits layer contains parameters $\beta$, while the preceding $S^M X$ lacks parameters. The absence of parameters in these earlier layers raises concerns about why only the last layer is parameterized, which could lead to over-smoothing due to unparameterized iterations of $S X$ and consequently limit the model’s expressiveness. 3. **Basic Transfer Learning Approach**: The transfer learning method employed, a simple $\delta$ fine-tuning, appears overly basic. There is little exploration of alternative, established methods in transfer learning or meta-learning that could potentially enhance the model’s adaptability and robustness. 4. **Issues in Hyperparameter Sensitivity Testing**: The sensitivity experiments on hyperparameters are limited. For instance, in the $\lambda$ experiment, the model fails to achieve the optimal solution seen at $M=5$. Additionally, the range of $\lambda$ tested is narrow; a broader, exponential scale (e.g., 0.01, 0.001, 0.0001) would provide a more comprehensive understanding of the model’s sensitivity. 5. **Lack of Notational Clarity**: The notation lacks clarity and could benefit from a dedicated section outlining all definitions. Many symbols, such as $X_j$, are undefined in Appendix A. A coherent notation guide would improve readability and help readers follow the technical details more effectively.
1. **Triviality of Convergence Proof**: The theoretical proof for convergence appears trivial. Although the paper claims that $Z$ is non-i.i.d., Assumption 4.1 indicates that $X$ is i.i.d., leading to a clear covariance matrix for $Z$ as $A^\top A / np$. Following Modification 1 in Appendix C, previous theorems can be trivially adapted with straightforward modifications. Thus, the convergence proof lacks substantial novelty and rigor.
NIPS_2016_232
NIPS_2016
weakness of the suggested method. 5) The literature contains other improper methods for influence estimation, e.g. 'Discriminative Learning of Infection Models' [WSDM 16], which can probably be modified to handle noisy observations. 6) The authors discuss the misestimation of mu, but as it is the proportion of missing observations - it is not wholly clear how it can be estimated at all. 5) The experimental setup borrowed from [2] is only semi-real, as multi-node seed cascades are artificially created by merging single-node seed cascades. This should be mentioned clearly. 7) As noted, the assumption of random missing entries is not very realistic. It would seem worthwhile to run an experiment to see how this assumption effects performance when the data is missing due to more realistic mechanisms.
5) The experimental setup borrowed from [2] is only semi-real, as multi-node seed cascades are artificially created by merging single-node seed cascades. This should be mentioned clearly.
pO7YD7PADN
EMNLP_2023
1. Limited technical contributions. The compression techniques evaluated are standard existing methods like quantization and distillation. The debiasing baselines are also from prior work. There is little technical innovation. 2. Limited datasets and models. The bias benchmarks only assess gender, race, and religion. Other important biases and datasets are not measured. Also missing are assessments on state-of-the-art generative models like GPT. 3. Writing logic needs improvement. Some parts, like introducing debiasing baselines in the results, make the flow confusing.
2. Limited datasets and models. The bias benchmarks only assess gender, race, and religion. Other important biases and datasets are not measured. Also missing are assessments on state-of-the-art generative models like GPT.
NIPS_2020_342
NIPS_2020
1. The primary motivation for the work is not well supported. Certainly, cities do manage thousands of intersections. While unquantified, it is not clear that the cost of training individually would surpass that of the degradation seen in the multi-env setting. 2. It is stated both that the multi-env model has an inevitable performance loss and that the multi-env model outperforms the single-env model due to knowledge sharing. These two statements seem to be conflicting. Please clarify. 3. In section 5.1, the single-env results, it is not clear that FRAP is only applicable in 37 of the 112 cases. As there is quite a lot of recent work on the single-env TSCP. It would have been better to compare to a less restrictive baseline. Such methods can be found in the following: a. Shabestary, Soheil Mohamad Alizadeh, and Baher Abdulhai. "Deep learning vs. discrete reinforcement learning for adaptive traffic signal control." International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2018. b. Ault, James, et al. "Learning an Interpretable Traffic Signal Control Policy." International Conference on Autonomous Agents and MultiAgent Systems. AAMAS, 2020. c. Liang, Xiaoyuan, et al. "Deep reinforcement learning for traffic light control in vehicular networks." IEEE Transactions on Vehicular Technology. IEEE, 2019. 4. In the supplementary material it is stated: “AttendLight in single-env regime outperforms other algorithms in 107 cases out of 112 cases“ and that AttendLight reduces ATT by 10% on average over FRAP. While it is clear how attention is useful in the multi-env setting, could you please add some analysis as to why it is expected to outperform an algorithm designed for single intersections? 5. As it is proposed in the paper that the method is suitable for city-wide control, it is important to provide an analysis of the worst-case results of the method. If on average traffic is alleviated, but certain intersections become nearly impassable this would not be a viable solution. A glance at the numbers in the supplement shows this method may result in some intersections experiencing a 78% increase in average travel time. Please provide such a worse case analysis.
2. It is stated both that the multi-env model has an inevitable performance loss and that the multi-env model outperforms the single-env model due to knowledge sharing. These two statements seem to be conflicting. Please clarify.
oqDoAMYbgA
ICLR_2024
1. The experimental study is limited: the comparisons with other methods are provided only on a single Wiki-small dataset. From that, it’s not enough to judge on the comparison with other baselines. 2. The training time seems to be the main bottleneck of the method, its training is slower than for almost any other tree method (as reported in the paper). Probably because of that, applying the method on bigger datasets becomes infeasible. (Fair to say, that the same shortcoming applies for the original Softmax Tree, and the presented method seems to double the training time). 3. The method seems to be quite sensitive to hyperparameters, so in order to apply it method for a new problem, one has to perform some careful hyperparameter search to find a proper $\alpha$.
3. The method seems to be quite sensitive to hyperparameters, so in order to apply it method for a new problem, one has to perform some careful hyperparameter search to find a proper $\alpha$.
haPIkA8aOk
EMNLP_2023
1. The description of the metrics is limited. it would be desirable to have an explanation of the metrics used in the paper. Or at least a citation to the metrics would have been good. 2. The training objective in Equation 7 would increase the likelihood of negative cases as well resulting in unwanted behavior. Should the objective be: \mathcal{L}_{c} - \mathcal{L}_{w}? 3. The paper needs a bit of polishing as at times equations are clubbed together. The equations in Sections 4 and 5 can be clubbed together while introducing them. 4. The paper motivates by the fact that we need to generate multiple sequences during the test time and progress to get rid of them. However, ASPIRE generates multiple answers during the training phase. This should be explicitly mentioned in the paper as it directly conflicts with the claim of not generating multiple sequences. 5. A bit more analysis on the impact of the number of model parameters is warranted.
1. The description of the metrics is limited. it would be desirable to have an explanation of the metrics used in the paper. Or at least a citation to the metrics would have been good.
30kbnyD9hF
EMNLP_2023
- Lack of reference explaining communication in this context. - The paper introduces four communication modes (debate, report, relay, and memory) without sufficient support from literature, despite existing relevant work in argumentation theory. Section 4.2 provides inadequate details and lacks illustrative examples. - Figure 3 is challenging to understand. The workflow and captions are unclear, and the representation of communication modes on the left side is confusing. - Figure 4's tabular representation of node agent interactions is not intuitive.
- Figure 3 is challenging to understand. The workflow and captions are unclear, and the representation of communication modes on the left side is confusing.
ICLR_2023_2630
ICLR_2023
- The technical novelty and contributions are a bit limited. The overall idea of using a transformer to process time series data is not new, as also acknowledged by the authors. The masked prediction was also used in prior works e.g. MAE (He et al., 2022). The main contribution, in this case, is the data pre-processing approach that was based on the bins. The continuous value embedding (CVE) was also from a prior work (Tipirneni & Reddy 2022), and also the early fusion instead of late fusion (Tipirneni & Reddy, 2022; Zhang et al., 2022). It would be better to clearly clarify the key novelty compared to previous works, especially the contribution (or performance gain) from the data pre-processing scheme. - It is unclear if there are masks applied to all the bins, or only to one bin as shown in Fig. 1. - It is unclear how the static data (age, gender etc.) were encoded to input to the MLP. The time-series data was also not clearly presented. - It is unclear what is the "learned [MASK] embedding" mean in the SSL pre-training stage of the proposed method. - The proposed "masked event dropout scheme" was not clearly presented. Was this dropout applied to the ground truth or the prediction? If it was applied to the prediction or the training input data, will this be considered for the loss function? - The proposed method was only evaluated on EHR data but claimed to be a method designed for "time series data" as in both the title and throughout the paper. Suggest either tone-down the claim or providing justification on more other time series data. - The experimental comparison with other methods seems to be a bit unfair. As the proposed method was pre-trained before the fine-tuning stage, it is unclear if the compared methods were also initialised with the same (or similar scale) pre-trained model. If not, as shown in Table 1, the proposed method without SSL performs inferior to most of the compared methods. - Missing reference to the two used EHR datasets at the beginning of Sec. 4.
- It is unclear what is the "learned [MASK] embedding" mean in the SSL pre-training stage of the proposed method.
NIPS_2020_1108
NIPS_2020
- The reported results seem to be partially derivative: extension to hyper-networks of results already presented in the literature for standard networks. - The case with finite width for f and infinite width for g is not discussed: it would have provided a complete treatment of the topic. - Presentation could be improved, first of all by removing typos (see additional comments), and then by providing more background on NTK and GP.
- The reported results seem to be partially derivative: extension to hyper-networks of results already presented in the literature for standard networks.
NIPS_2019_1145
NIPS_2019
The paper has the following main weaknesses: 1. The paper starts with the objective of designing fast label aggregation algorithms for a streaming setting. But it doesn’t spend any time motivating the applications in which such algorithms are needed. All the datasets used in the empirical analysis are static datasets. For the paper to be useful, the problem considered should be well motivated. 2. It appears that the output from the algorithm depends on the order in which the data are processed. This should be clarified. 3. The theoretical results are presented under the assumption that the predictions of FBI converge to the ground truth. Why should this assumption be true? It is not clear to me how this assumption is valid for finite R. This needs to clarified/justified. 3. The takeaways from the empirical analysis are not fully clear. It appears that the big advantage of the proposed methods is their speed. However, the experiments don’t seem to be explicitly making this point (the running times are reported in the appendix; perhaps they should be moved to the main body). Plus, the paper is lacking the key EM benchmark. Also, perhaps the authors should use a different dataset in which speed is most important to showcase the benefits of this approach. Update after the author response: I read the author rebuttal. I suggest the authors to add the clarifications they detailed in the rebuttal to the final paper. Update after the author response: I read the author rebuttal. I suggest the authors to add the clarifications they detailed in the rebuttal to the final paper. Also, the motivating crowdsourcing application where speed is really important is not completely clear to me from the rebuttal. I suggest the authors clarify this properly in the final paper.
1. The paper starts with the objective of designing fast label aggregation algorithms for a streaming setting. But it doesn’t spend any time motivating the applications in which such algorithms are needed. All the datasets used in the empirical analysis are static datasets. For the paper to be useful, the problem considered should be well motivated.
43SOcneD8W
EMNLP_2023
1. The reported performance gain of the proposed framework is marginal when compared to the improvements introduced by simple Prompt Tuning approaches. For instance,for Table 3, out of 2.7% gain over Roberta backbone on ReTACRED, prompting tuning (i.e. HardPrompt) already achieves the gain of 1.7%. 2. The scope of the study is under-specified. It seems that the work focuses on injecting CoT- based approach to small-scale Language Models. If that is not the case, additional relevant CoT baselines for in-context learning of Large Language Models (for text-003 and ChatGPT) are missing in Table 2 and 3 (See Question A). 3. The major components of the proposed frameworks are CCL and PR. Both of them are incremental over the previous methods with minor adaptation for CoT-based prompting proposal.
2. The scope of the study is under-specified. It seems that the work focuses on injecting CoT- based approach to small-scale Language Models. If that is not the case, additional relevant CoT baselines for in-context learning of Large Language Models (for text-003 and ChatGPT) are missing in Table 2 and 3 (See Question A).
NIPS_2016_117
NIPS_2016
weakness of this work is impact. The idea of "direct feedback alignment" follows fairly straightforwardly from the original FA alignment work. Its notable that it is useful in training very deep networks (e.g. 100 layers) but its not clear that this results in an advantage for function approximation (the error rate is higher for these deep networks). If the authors could demonstrate that DFA allows one to train and make use of such deep networks where BP and FA struggle on a larger dataset this would significantly enhance the impact of the paper. In terms of biological understanding, FA seems more supported by biological observations (which typically show reciprocal forward and backward connections between hierarchical brain areas, not direct connections back from one region to all others as might be expected in DFA). The paper doesn't provide support for their claim, in the final paragraph, that DFA is more biologically plausible than FA. Minor issues: - A few typos, there is no line numbers in the draft so I haven't itemized them. - Table 1, 2, 3 the legends should be longer and clarify whether the numbers are % errors, or % correct (MNIST and CIFAR respectively presumably). - Figure 2 right. I found it difficult to distinguish between the different curves. Maybe make use of styles (e.g. dashed lines) or add color. - Figure 3 is very hard to read anything on the figure. - I think this manuscript is not following the NIPS style. The citations are not by number and there are no line numbers or an "Anonymous Author" placeholder. - I might be helpful to quantify and clarify the claim "ReLU does not work very well in very deep or in convolutional networks." ReLUs were used in the AlexNet paper which, at the time, was considered deep and makes use of convolution (with pooling rather than ReLUs for the convolutional layers).
- Figure 3 is very hard to read anything on the figure.
NIPS_2018_756
NIPS_2018
It looks complicated to assess the practical impact of the paper. On the one hand, the thermodynamic limit and the Gaussianity assumption may be hard to check in practice and it is not straightforward to extrapolate what happens in the finite dimensional case. The idea of identifying the problem's phase transitions is conceptually clear but it is not explicitly specified in the paper how this can help the practitioner. The paper only compares the AMP approach to alternate least squares without mention, for example, positive results obtained in the spectral method literature. Finally, it is not easy to understand if the obtained results only regard the AMP method or generalize to any inference method. Questions: - Is the analysis restricted to the AMP inference? In other words, could a tensor that is hard to infer via AMP approach be easily identifiable by other methods (or the other way round)? - Are the easy-hard-impossible phases be related with conditions on the rank of the tensor? - In the introduction the authors mention the fact that tensor decomposition is in general harder in the symmetric than in the non-symmetric case. How is this connected with recent findings about the `nice' landscape of the objective function associated with the decomposition of symmetric (orthogonal) order-4 tensors [1]? - The Gaussian assumption looks crucial for the analysis and seems to be guaranteed in the limit r << N. Is this a typical situation in practice? Is always possible to compute the `effective' variance for non-gaussian outputs? Is there a finite-N expansion that characterize the departure from Gaussianity in the non-ideal case? - For the themodynamic limit to hold, should one require N_alpha / N = O(1) for all alpha? - Given an observed tensor, is it possible to determine the particular phase it belongs to? [1] Rong Ge and Tengyu Ma, 2017, On the Optimization Landscape of Tensor Decompositions
- In the introduction the authors mention the fact that tensor decomposition is in general harder in the symmetric than in the non-symmetric case. How is this connected with recent findings about the `nice' landscape of the objective function associated with the decomposition of symmetric (orthogonal) order-4 tensors [1]?
ARR_2022_358_review
ARR_2022
- Some definitions and statements are not clear or well justified. - Lack of clarity in the definition of the input/outputs for each subtask 063-065 Though most of the existing studies consider the expansion a regression problem ... -> Missing a reference to support this statement 081-082 TEAM that performs both the Attach and Merge operations together -> Performs attach and merge together or is trained together? 092 Missing reference for wordnet definition 112-114 "The taxonomy T is arranged in a hierarchical manner directed edges in E as shown in Figure 1." - > It doesn't seem clear. 115 query concept q: Is this a concept or a candidate word? Your initial examples (Page 1) mention "mango" and "nutrient" and do not seem to be concepts according to your definition. 188 the query synset sq -> Is this a query synset or a word that belongs to the synset (concept) X. I am not sure if I understand correctly but in the case of attach, the query concept is a (d, ss): definition, synonymns included in the synset. However, it is not clear to me if it is the same for merge as it seems like the query concept is (d, ss) but ss is just the word that you are removing. 214 and the synset is represented by the pre-trained embedding of the synonym word itself. - > It applies for merge in most cases but it is not case for attach, right? Because attach considers candidate concept (that can be composed by a synonym set) 224 comprising of the node -> that comprises the node ... 245 The GAT is trained with the whole model? Needs to be reviewed by a English native speaker and some sentences need to be rewriting for improving the clarity.
245 The GAT is trained with the whole model? Needs to be reviewed by a English native speaker and some sentences need to be rewriting for improving the clarity.
NIPS_2020_1524
NIPS_2020
* The paper makes several “hand-wavy” arguments, which are suitable for supporting the claims in the paper; but it is unclear if they would generalize for analyzing / developing other algorithms. For instance: 1. Replacing `n^2/(2*s^2)` with an arbitrary parameter `lambda` (lines 119-121) 2. Taking SGD learning rate ~ 0.1 (line 164) — unlike the Adam default value, it is unclear what the justification behind this value is.
1. Replacing `n^2/(2*s^2)` with an arbitrary parameter `lambda` (lines 119-121) 2. Taking SGD learning rate ~ 0.1 (line 164) — unlike the Adam default value, it is unclear what the justification behind this value is.
2z9o8bMQNd
EMNLP_2023
- So difficult to follow the contribution of this paper. And it looks like an incremental engineering paper. The proposed method has been introduced in many papers, such as [1] Joshi, A., Bhat, A., Jain, A., Singh, A., & Modi, A. (2022, July). COGMEN: COntextualized GNN-based Multimodal Emotion Recognition. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 4148-4164). - The related work should be updated with more recent related works. - The experimental section needs some significance tests to further verify the effectiveness of the method put forward in the paper. - For the first time appearing in the text, the full name must be written, and abbreviations must be written in parentheses. When it appears in the abstract, it needs to be written once, and when it appears in the main text, it needs to be repeated again, that is, the full name+parentheses (abbreviations) should appear again. - Error analysis plays a crucial role in evaluating model performance and identifying potential issues. We encourage the authors to conduct error analysis in the paper and provide detailed explanations of the model's performance under different scenarios. Error analysis will aid in guiding subsequent improvements and expansions of the ERC research. - Writing mistakes are common across the overall paper, which could be found in “Typos, Grammar, Style, and Presentation Improvements”.
- Error analysis plays a crucial role in evaluating model performance and identifying potential issues. We encourage the authors to conduct error analysis in the paper and provide detailed explanations of the model's performance under different scenarios. Error analysis will aid in guiding subsequent improvements and expansions of the ERC research.
NIPS_2022_742
NIPS_2022
It seems that the 6dof camera poses of panoramas are required to do the projection. Hence, precisely speaking, the method is not fully self-supervised but requires camera pose ground truth. This is usually accessible, easier compared to the ground truth layout, but may also cause error for the layout projection and thus hurts the overall finetuning performance. The experiment could be stronger to demonstrate the effectiveness of the method from two aspects: 1) a stronger baseline. It seems SSLayout360 is in general outperforming HorizonNet. It would be convincing to show that this method is able to improve powerful backbones. 2) analyze the domain gap. It would be nice to add some discussions about the gap between datasets. Some datasets are closer to each other thus the adaption may not be a big issue. Also, if the method is able to finetune a pre-trained model on synthetic data, then the value of the approach would be much higher.
2) analyze the domain gap. It would be nice to add some discussions about the gap between datasets. Some datasets are closer to each other thus the adaption may not be a big issue. Also, if the method is able to finetune a pre-trained model on synthetic data, then the value of the approach would be much higher.
NIPS_2021_2163
NIPS_2021
Weakness: I have some concerns on identification mechanism based on identity bank. 1) Scalability. As shown in Table 3 (a), the performance is getting worse with growth of the maximum number of identities. It means that the capacity should be preset to some small number (e.g., 10). In real-world scenario, we can have more than 10 objects and most of the time we don't know how many objects we will need to handle in the future. Have the authors thought about how to scale up without compromising performance? 2) Randomness. Identities are randomly assigned one embedding from the identity bank. How the results are robust against this randomness? It would be undesirable for the result to change with each inference. It would be great to have some analysis on this aspect. Overall Evaluation: The paper present a novel approach for multi-object video object segmentation and the proposed method outperfrom previous state-of-the-arts on several benchmarks. Now, I would recommend to accept this paper. I will finalize the score after seeing how authors address my concerns in Weakness. While future works are discussed in Supplementary Materials, I encourage the authors to include more discussions on limitations and societal impacts.
1) Scalability. As shown in Table 3 (a), the performance is getting worse with growth of the maximum number of identities. It means that the capacity should be preset to some small number (e.g., 10). In real-world scenario, we can have more than 10 objects and most of the time we don't know how many objects we will need to handle in the future. Have the authors thought about how to scale up without compromising performance?
EtNebdSBpe
EMNLP_2023
- The paper is hard to read and somewhat difficult to follow. - The motivation is unclear. The authors argue that the LLP setup is relevant for (1) privacy and (2) weak supervision. (1) Privacy: the authors claim that the LLP paradigm is relevant for training on sensitive data as the labels for such datasets are not publicly available. However, the setting proposed in this paper does require gold (and publicly available) labels to formulate the ground truth proportion. If this proportion can be formulated without gold labels, it should be discussed. (2) Weak Supervision: in lines 136-137, the authors mention that the associated label proportions "...provides the weak supervision for training the model". However, weak supervision is a paradigm in which data is automatically labeled with noisy labels using some heuristics and labeling functions. It remains unclear to me in what way this setting is related to the proportion parameter authors use in their work. - The authors claim it to be one of the preliminary works discussing the application of LLP to NLP tasks. However, I don't see anything NLP-specific in their approach. - Not all theoretical groundings seem to be relevant to the main topic (e.g., some of the L_dppl irregularities). Additional clarification of their relevance is needed. - Section 3.3 says the results are provided for binary classifiers only, and the multi-class setting remains for future work. However, one of the datasets used for experiments is multi-label. - The experimental setting is unclear: does Table 1 contain the test results of the best model? If so, how was the best model selected (given that there is no validation set)? Also, if the proposed method is of special relevance to the sensitive data, why not select a sensitive dataset to demonstrate the method's performance on it? Or LLP data? - The authors claim the results to be significant. However, no results of significance testing are provided.
- The authors claim it to be one of the preliminary works discussing the application of LLP to NLP tasks. However, I don't see anything NLP-specific in their approach.
NIPS_2020_1519
NIPS_2020
- The proposed gradient unrolling method requires N steps to unrolling the gradient, which is slow and perhaps difficult to scale up to learning large and complicated EBLVMs. Although corollary 3 indicates that the estimation accuracy can be asymptoticly arbitrarily small, that requires N to be sufficiently large that may exceed the computing limit. - For comparison, at least one NCE-based method should be included. [1] shows that with a strong noise distribution, this line of work is possible to learn EBM on natural images. - In Table 2, it seems that higher dimension of h leads to worse result. Possible reason needs to be discussed. - Figure 4 shows that the learning can be unstable. [1] Flow Contrastive Estimation of Energy-Based Models
- For comparison, at least one NCE-based method should be included. [1] shows that with a strong noise distribution, this line of work is possible to learn EBM on natural images.
Akk5ep2gQx
EMNLP_2023
1. The experiment section could be improved. For example, it is better to carry significance test on the human evaluation results. It is also beneficial to compare the proposed method with some most recent LLM. 2. The classifier of determining attributes using only parts of the sentence may not perform well. Specifically, I am wondering what is the performance of the attribute classifer obtained using Eq.2 and Eq.7. 3. Some of the experiment results could be explained in more details. For example, the author observes that "Compared to CTRL, DASC has lower Sensibleness but higher Interestingness", but why? Is that because DASC is bad for exhibiting Sensibleness? Similar results are also observed in Table1.
1. The experiment section could be improved. For example, it is better to carry significance test on the human evaluation results. It is also beneficial to compare the proposed method with some most recent LLM.
ICLR_2021_2802
ICLR_2021
of this paper include the following aspects: 1. This paper is not well written and some parts are hard to follow. It lacks necessary logical transition and important figures. For example, it lacks explanations to support the connection between the proposed training objective and the Cross Margin Discrepancy. Also, it should at least contain one figure to explain the overall architecture or training pipeline. 2. The authors claim that there is still no research focusing on the joint error for UDA. But this problem of arbitrarily increased joint error has already been studied in previous works like “Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment”, in ICML2019. The authors should discuss on that work and directly illustrate the relationship between that work and the proposed one, and why the proposed method is better. 3. Although the joint error is indeed included in the proposed upper bound, in practice the authors have to use Source-driven Hypothesis Space and Target-driven Hypothesis Space to obtain approximation of f_{S} and f_{T}. To me, in practice the use of three classifiers h, f_{1}, f_{2} is just like an improvement over MCD. Hence, I doubt whether the proposed method can still simultaneously minimize the domain discrepancy and the joint error. For example, as shown in the Digit experiments, the performance is highly sensitive to the choice of \gamma in SHS, and sometimes the optimal \gamma value is conflicting for different domains in the same dataset, which is strange since according to the paper’s theorem, smaller \gamma only means more relaxed constraint on hypothesis space. Also, as shown in the VisDA experiments, the optimal value of \eta is close to 1, which means classification error from the approximate target domain is basically useless. 4. The benchmark results are inferior to the state-of-the-art methods. For instance, the Contrastive Adaptation Network achieves an average of 87.2 on VisDA2017, which is much higher than 79.7 achieved by the proposed method. And the same goes with Digit, Office31, and Office-Home dataset.
2. The authors claim that there is still no research focusing on the joint error for UDA. But this problem of arbitrarily increased joint error has already been studied in previous works like “Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment”, in ICML2019. The authors should discuss on that work and directly illustrate the relationship between that work and the proposed one, and why the proposed method is better.
ICLR_2023_1093
ICLR_2023
1. I thought the novelty is questionable. The authors claimed that the proposed Uni-Mol is the first pure 3D molecular pretraining framework. However, there have been already a few similar works. For example, a. The Graph Multi-View Pre-training (GraphMVP) framework leverages the correspondence and consistency between 2D topological structures and 3D geometric views. Liu et al., Pre-training Molecular Graph Representation with 3D Geometry, ICLR 2021. b. The geometry-enhanced molecular representation learning method (GEM) proposes includes several dedicated geometry-level self-supervised learning strategies to learn molecular geometry knowledge. Fang et al., Geometry-enhanced molecular representation learning for property prediction, nature machine intelligence, 2022. c. Guo et al. proposed a self-supervised pre-training model for learning structure embeddings from protein 3D structures. Guo et al., Self-Supervised Pre-training for Protein Embeddings Using Tertiary Structures, AAAI 2022. d. The GeomEtry-Aware Relational Graph Neural Network (GearNet) framework uses type prediction, distance prediction and angle prediction of masked parts for pretaining. Zhang et al., Protein Representation Learning by Geometric Structure Pretraining, ICML 2022 workshop. 2. The comparison with the SOTA methods may be unfair. The performance of the paper is based on the newly collected 209M dataset. However, the existing methods use smaller datasets. For example, GEM employs only 20M unlabeled data. Because the scale of datasets has a significant impact on the accuracy, the superior of the proposed method may be from the new large-scale datasets. 3. The authors claimed one of the contributions is that the proposed Uni-Mol contains a simple and efficient SE(3)-equivariant Transformer backbone. However, I thought this contribution is too weak. 4. The improvement is not very impressive or convincing. Although with a larger dataset for pretraining, the improvement is a bit limited, e.g., in Table 1. 5. It is not clear which part causes the main improvement: Transformer, pretraining or the larger dataset? 6. It could be better to show the 3D position recovery and masked atom prediction accuracy and visualize the results. 7. The visualization of the self-attention map and pair distance map in Appendix H is interesting. However, according to the visualization, the self-attention map is very similar to the pair distance map, as the author explained. In this case, why not directly use pair distance as attention? Or what does self-attention actually learn besides distance in the task? As self-attention is computationally expensive, is it really needed?
2. The comparison with the SOTA methods may be unfair. The performance of the paper is based on the newly collected 209M dataset. However, the existing methods use smaller datasets. For example, GEM employs only 20M unlabeled data. Because the scale of datasets has a significant impact on the accuracy, the superior of the proposed method may be from the new large-scale datasets.
rs78DlnUB8
EMNLP_2023
1. The paper lacks a clear motivation for considering text graphs. Except for the choice of complexity indices, which can easily be changed according to the domain, the proposed method is general and can be applied to other graphs or even other types of data. Moreover, formal formulations of text graphs and the research question are missing in the paper. 2. Several curriculum learning methods have been discussed in Section 1. However, the need for designing a new curriculum learning method for text graphs is not justified. The research gap, e.g., why existing methods can’t be applied, is not discussed. 3. Equations 7-11 provide several choices of function f. However, there is no theoretical analysis or empirical experiments to advise on the choice of function f. 4. In the overall performance comparison (Table 2), other curriculum learning methods do not improve performance compared to No-CL. These results are not consistent with the results reported in the papers of the competitors. At least some discussions about the reason should be included. In addition, it is unclear how many independent runs have been conducted to get the accuracy and F1. What are the standard deviations? 5. Although experimental results in Table 3 and Table 4 show that the performance remains unchanged, it is unclear how the transfer of knowledge is done in the proposed method. An in-depth discussion of this property should strengthen the soundness of the paper. 6. In line 118, the authors said the learned curricula are model-dependent, but they also said the curricula are transferrable across models. These two statements seem to be contradictory.
2. Several curriculum learning methods have been discussed in Section 1. However, the need for designing a new curriculum learning method for text graphs is not justified. The research gap, e.g., why existing methods can’t be applied, is not discussed.
ICLR_2021_1181
ICLR_2021
1.For domain adaptation in the NLP field, powerful pre-trained language models, e.g., BERT, XLNet, can overcome the domain-shift problem to some extent. Thus, the authors should be used as the base encoder for all methods and then compare the efficacy of the transfer parts instead of the simplest n-gram features. 2.The whole procedure is slightly complex. The author formulates the prototypical distribution as a GMM, which has high algorithm complexity. However, formal complexity analysis is absent. The author should provide an analysis of the time complexity and training time of the proposed SAUM method compared with other baselines. Besides, a statistically significant test is absent for performance improvements. 3.The motivation of learning a large margin between different classes is exactly discriminative learning, which is not novel when combined with domain adaptation methods and already proposed in the existing literature, e.g., Unified Deep Supervised Domain Adaptation and Generalization, Saeid et al., ICCV 2017. Contrastive Adaptation Network for Unsupervised Domain Adaptation, Kang et al., CVPR 2019 Joint Domain Alignment and Discriminative Feature Learning for Unsupervised Deep Domain Adaptation, Chen et al., AAAI 2019. However, this paper lacks detailed discussions and comparisons with existing discriminative feature learning methods for domain adaptation. 4.The unlabeled data (2000) from the preprocessed Amazon review dataset (Blitzer version) is perfectly balanced, which is impractical in real-world applications. Since we cannot control the label distribution of unlabeled data during training, the author should also use a more convinced setting as did in Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification, He et al., EMNLP 2018, which directly samples unlabeled data from millions of reviews. 5.The paper lacks some related work about cross-domain sentiment analysis, e.g., End-to-end adversarial memory network for cross-domain sentiment classification, Li et al., IJCAI 2017 Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification, He et al., EMNLP 2018 Hierarchical attention transfer network for cross-domain sentiment classification, Li et al., AAAI 18 Questions: 1.Have the authors conducted the significance tests for the improvements? 2.How fast does this algorithm run or train compared with other baselines?
1.For domain adaptation in the NLP field, powerful pre-trained language models, e.g., BERT, XLNet, can overcome the domain-shift problem to some extent. Thus, the authors should be used as the base encoder for all methods and then compare the efficacy of the transfer parts instead of the simplest n-gram features.
zWGDn1AmRH
EMNLP_2023
1.This paper is challenging to follow, and the proposed method is highly complex, making it difficult to reproduce. 2.The proposed method comprises several complicated modules and has more parameters than the baselines. It remains unclear whether the main performance gain originates from a particular module or if the improvement is merely due to having more parameters. The current version of the ablation study does not provide definitive answers to these questions. 3.The authors claim that one of their main contributions is the use of a Mahalanobis contrastive learning method to narrow the distribution gap between retrieved examples and current samples. However, there are no experiments to verify whether Mahalanobis yields better results than standard contrastive learning. 4.The proposed method involves multiple modules, which could impact training and inference speed. There should be experiments conducted to study and analyze these effects.
2.The proposed method comprises several complicated modules and has more parameters than the baselines. It remains unclear whether the main performance gain originates from a particular module or if the improvement is merely due to having more parameters. The current version of the ablation study does not provide definitive answers to these questions.
NIPS_2016_537
NIPS_2016
weakness of the paper is the lack of clarity in some of the presentation. Here are some examples of what I mean. 1) l 63, refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. 3) l 123, this is not the definition of "dominated" 4) for the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann which discusses the ability of universal kernels two separate an arbitrary finite data set with margin arbitrarily close to one. 5) an example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering. 6) in section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means. 7) in the same section, the notation {\cal P} with a subscript is used several times without being defined. 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers.
8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers.
NIPS_2017_130
NIPS_2017
weakness)? 4.] Can the authors discuss the sensitivity of any fixed tuning parameters in the model (both strengths and weakness)? 5.] What is the scalability of the model proposed and computational complexity? Will the authors be making the code publicly available with the data? Are all results reproducible using the code and data? 6.] What conclusion should a user learn and drawn? The applications section was a bit disappointing given the motivation of the paper. A longer discussion is important to the impact and success of this paper. Please discuss.
4.] Can the authors discuss the sensitivity of any fixed tuning parameters in the model (both strengths and weakness)?
NIPS_2019_1411
NIPS_2019
] *assumption* - I am not sure if it is safe to assume any programmatic policy can be parameterized by a vector \theta and is differentiable in \theta. (for Theorem 4.2) *initial policy* - In all the experiments (TORCS, MountainCar, and Pendulum), the IPPG polices improve upon the PRIOR. It is not clear if IPPG can learn from scratch. Showing the performance of IPPG learning from scratch would be important to verify this. - Can IPPG be initialized with a neural policy? It seems that it is possible based on Algorithm 1. If so, it would be interesting to see how well IPPG work using a neural policy learned with DDPG instead of PRIOR. Can IIPG improve upon DDPG? *experiment setup* - It is mentioned that "both NDPS and VIPER rely on imitating a fixed neural policy oracle" (L244). What is this policy oracle? Is this the policy learned using DDPG shown in the tables? If not, what's the performance of using NDPS and VIPER to distill the DDPG policies? - It would be interesting to see if the proposed framework works with different policy gradient approaches. *experiment results* - How many random seeds are used for learning the policies (DDPO and IPPG)? - What are the standard deviation or confidence intervals for all performance values? Are all the tracks deterministic? Are the DDPG policies deterministic during testing? - It would be better if the authors provided some videos showing different policies controlling cars on different tracks so that we can have a better idea of how different methods work. *reproducibility* - Some implementation details are lacking from the main paper, which makes reproducing the results difficult. It is not clear to me what policy gradient approach is used. - The provided dropbox link leads to an empty folder (I checked it on July 5th). *related work* - I believe it would be better if some prior works [1-5] exploring learning-based program synthesis frameworks were mentioned in the paper. *reference* [1] "Neuro-symbolic program synthesis" in ICLR 2017 [2] "Robustfill: Neural program learning under noisy I/O" in ICML 2017 [3] "Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis" in ICLR 2018 [4] "Neural program synthesis from diverse demonstration videos" in ICML 2018 [5] "Execution-Guided Neural Program Synthesis" in ICLR 2019 ----- final review ----- After reading the other reviews and the author response, I have mixed feelings about this paper. On one hand, I do recognize the importance of this problem and appreciate the proposed framework (IPPG). On the other hand, many of my concerns (e.g. the choices of initial policy, experiment setup, and experiment results) are not addressed, which makes me worried about the empirical performance of the proposed framework. To be more specific, I believe the following questions are important for understanding the performance of IPPG, which remain unanswered: (1) Can IPPG learn from scratch (i.e. where no neural policy could solve the task that we are interested in)? The authors stated that "IPPG can be initialized with a neural policy, learned for example via DDPG, and thus can be made to learn" in the rebuttal, which does not answer my question, but it is probably because my original question was confusing. (2) Can IPPG be initialized with a neural policy? If so, can IPPG be initialized with a policy learned using DDPG and improve it? As DDPG achieves great performance on different tracks, I am just interested in if IPPG can even improve it. (3) How many random seeds are used for learning the policies (DDPO and IPPG)? What are the standard deviation or confidence intervals for all performance values? I believe this is important for understanding the performance of RL algorithms. (4) What is the oracle policy that NDPS and VIPER learn from? If they do not learn from the DDPG policy, what is the performance if they distill the DDPG policy. (5) Can IPPG learn from a TPRO/PPO policy? While the authors mentioned that TRPO and PPO can't solve TORCS tasks, I believe this can be verified using the CartPole or other simpler environment. In sum, I decided to keep my score as 5. I am ok if this paper gets accepted (which is likely to happen given positive reviews from other reviewers) but I do hope this paper gets improved from the above points. Also, it would be good to discuss learning-based program synthesis frameworks as they are highly-related.
- It would be interesting to see if the proposed framework works with different policy gradient approaches. *experiment results* - How many random seeds are used for learning the policies (DDPO and IPPG)?
mERmlOPxPY
EMNLP_2023
- W1) The paper evaluates only on one dataset and on one task. Results and conclusions would be stronger if the analysis were applied to more datasets and more tasks. - W2) Similarly, only one LLM model (GPT-3) is examined. - W3) Some terms like "co-prediction" (line 278) and "in-context" (line 285) are not defined or explained. - W4) A potential weakness is that the paper overlooks the presence of multi-label tweets. For instance, do multi-label tweets impact (i.e., harm) the effectiveness of the EG approach? I imagine that multi-label tweets would confuse the model insofar as the model would have a harder time generating a clear definition of a single label concept. See Q5 below.
-W1) The paper evaluates only on one dataset and on one task. Results and conclusions would be stronger if the analysis were applied to more datasets and more tasks.
VmqTuFMk68
ICLR_2024
1. Writtings could be improved in some places. For two examples, * In definition 2.1, what are the "relevant" auxiliary model weights? The current definition is a bit difficult for me to interpret. * In definition 2.3, are $p_t$'s referring to positional embedding? Could you explain why there aren't positional embeddings in definition 2.10. 2. Theorem 2.5 shows linear attention could be approximated by softmax attention. Can softmax attention also be approximated by linear attention? If not, I feel Theorem 2.5 alone does not suffice to justify the claim that "Thus, we often use linear attention in TINT". Let me know if I have misunderstood anything. In addition, is the claimed parameter saving based on linear attention or self-attention? 3. Definition 2.8 uses finite difference to approximate gradient. I am wondering if we can do this from end to end. That is, can we simulate a backward pass by doing finite-difference and two forward-pass? What's the disadvantage of doing so? 4. This work provides experiments on language tasks, while prior works provide experiments on simulated tasks (e.g., Akyurek et al 2022 did ICL for linear regression). So the empirical results are not directly comparable with prior works. 5. I feel an important prior work [1] is missed. Specifically, [1] also did approximation theory for ICL using transformers. How would the required number of parameters in the construction in this work compare to theirs? [1] Bai, Yu, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei. "Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection." NeurIPS 2023
1. Writtings could be improved in some places. For two examples, * In definition 2.1, what are the "relevant" auxiliary model weights? The current definition is a bit difficult for me to interpret.
7tpMhoPXrL
ICLR_2025
• GDPR Compliance Concerns: The paper’s reliance on approximate unlearning without theoretical guarantees presents a significant shortfall. While approximate unlearning may be practical, it falls short in scenarios where data privacy and regulatory compliance are non-negotiable. Without provable guarantees, it is questionable whether this method can satisfy GDPR requirements for data erasure. This gap undermines the core purpose of Model Unlearning in privacy-centered contexts, where the "right to be forgotten" demands more than a probabilistic assurance. • Scalability to Other Domains: The Forget Vector approach is developed and validated primarily for image classification tasks, potentially limiting its application in NLP or other non-visual domains where input perturbations may be less effective. • Dependence on MIA (Membership Inference Attack) Testing via Ulira: While the paper uses MIA testing as a metric for unlearning effectiveness, the effectiveness of MIA testing itself is not sufficiently robust for privacy guarantees. Additionally the use of U-LiRA [1] is recommended. • Sensitivity to Data Shifts: From the paper the effectiveness of unlearning decreases under certain data shifts, which may hinder the reliability of Forget Vectors in dynamic data environments or adversarial settings.
• Dependence on MIA (Membership Inference Attack) Testing via Ulira: While the paper uses MIA testing as a metric for unlearning effectiveness, the effectiveness of MIA testing itself is not sufficiently robust for privacy guarantees. Additionally the use of U-LiRA [1] is recommended.
NIPS_2018_197
NIPS_2018
weakness of the paper: its clarity. From the presentation, it seems evident that the author is an expert in the field of computer algebra/algebraic geometry. It is my assumption that most members of the NIPS community will not have a strong background on this subject, me including. As a consequence, I found it very hard to follow Sect. 3. My impression was that the closer the manuscript comes to the core of algebraic geometry results, the less background was provided. In particular, I would have loved to see at least a proof idea or some more details/background on Thm. 3.1 and Cor. 3.2. Or maybe, the author could include one less example in the main text but show the entire derivation how to get from one concrete instance of A to right kernel B by manual computation? Also, for me the description in Sect. 2.4 was insufficient. As a constructive instruction, maybe drop one of the examples (R(del_t) / R[sigma_x]), but give some more background on the other? This problem of insufficient clarity cannot be explained by different backgrounds alone. In Sect. 3.2, the sentence "They are implemented in various computer algebra systems, 174 e.g., Singular [8] and Macaulay2 [16] are two well-known open source systems." appears twice (and also needs grammar checking). If the author could find a minimal non-trivial example (to me, this would be an example not including the previously considered linear differential operator examples) for which the author can show the entire computation in Sect. 3.2 or maybe show pseudo-code for some algorithms involving the Groebner basis, this would probably go a long way in the community. That being said, the paper's strengths are (to the best of this reviewer's knowledge) its originality and potential significance. The insight that Groebner bases can be used as a rich language to encode algebraic constraints and highlighting the connection to this vast background theory opens an entirely new approach in modelling capacities for Gaussian processes. I can easily imagine this work being the foundation for many physical/empirical-hybrid models in many engineering applications. I fully agree and applaud the rationale in lines 43-54! Crucially, the significance of this work will depend on whether this view will be adopted fast enough by the rest of the community which in turn depends on the clarity of the presentation. In conclusion: if I understood the paper correctly, I think the theory presented therein is highly original and significant, but in my opinion, the clarity should be improved significantly before acceptance, if this work should reach its full potential. However, if other reviewers have a different opinion on the level of necessary background material, I would even consider this work for oral presentation. Minor suggestions for improvements: - In line 75, the author writes that the "mean function is used as regression model" and this is how the author uses GPs throughout. However, in practice the (posterior) covariance is also considered as "measure of uncertainty". It would be insightful, if the author could find a way to visualize this for one or two of the examples the author considers, e.g., by drawing from the posterior process. - I am not familiar with the literature: all the considerations in this paper should also be applicable to kernel (ridge) regression, no? Maybe this could also be presented in the 'language of kernel interpolation/smoothing' as well? - I am uncertain about the author's reasoning on line 103. Does the author want to express that the mean is a sample from the GP? But the mean is not a sample from the GP with probability 1. Generally, there seems to be some inconsistency with the (algebraic) GP object and samples from said object. - The comment on line 158 "This did not lead to practical problems, yet." is very ominous. Would we even expect any problem? If not, I would argue you can drop it entirely. - I am not sure whether I understood Fig. 2 correctly. Am I correct that u(t) is either given by data or as one draw from the GP and then, x(t) is the corresponding resulting state function for this specified u? I'm assuming that Fig. 3 is done the other way around, right? --- Post-rebuttal update: Thank you for your rebuttal. I think that adding computer-algebra code sounds like a good idea. Maybe presenting the work more in the context of kernel ridge regression would eliminate the discussion about interpreting the uncertainty. Alternatively, if the author opts to present it as GP, maybe a video could be used to represent the uncertainty by sampling a random walk through the distribution. Finally, it might help to not use differential equations as expository material. I assume the author's rationale for using this was that reader might already a bit familiar with it and thus help its understanding. I agree, but for me it made it harder to understand the generality with respect to Groebner bases. My first intuition was that "this has been done". Maybe make they Weyl algebra and Figure 4 the basic piece? But I expect this suggestion to have high variance.
- I am not familiar with the literature: all the considerations in this paper should also be applicable to kernel (ridge) regression, no? Maybe this could also be presented in the 'language of kernel interpolation/smoothing' as well?
ICLR_2023_1500
ICLR_2023
1. A mathematical formulation for the entire problem is missed. Though the problem is complex and difficult to be solved with an end-to-end framework, the original formulation is still needed at the beginning of the Methods section, followed by a brief introduction of the entire framework, i.e. how to split the entire task into several components and what specific role does each component play. I also suggest authors present a figure to illustrate the overall computing procedure of ModelAngelo. 2. Quantitive evaluation results in Figure 3 only reflect middle outputs rather than the final outputs. Figure 4 illustrates the comparison of final results with a single data sample. Thereby, current evaluations are not convincing enough to confirm ModelAngelo’s superiority to competitors. Is it possible for a quantitive comparison on the final outputs? 3. Many losses (e.g. (RMSD) loss, backbone RMSD loss, amino-acid classification loss, local confidence score loss, torsion angles loss, and full atom loss) are involved in the learning of GNN. What are the definition of these losses? How do you get the ground truth labels required (if so) for these losses?
2. Quantitive evaluation results in Figure 3 only reflect middle outputs rather than the final outputs. Figure 4 illustrates the comparison of final results with a single data sample. Thereby, current evaluations are not convincing enough to confirm ModelAngelo’s superiority to competitors. Is it possible for a quantitive comparison on the final outputs?
NIPS_2017_28
NIPS_2017
- Most importantly, the explanations are very qualitative and whenever simulation or experiment-based evidence is given, the procedures are described very minimally or not at all, and some figures are confusing, e.g. what is "sample count" in fig. 2? It would really help adding more details to the paper and/or supplementary information in order to appreciate what exactly was done in each simulation. Whenever statistical inferences are made, there should be error bars and/or p-values. - Although in principle the argument that in case of recognition lists are recalled based on items makes sense, in the most common case of recognition, old vs new judgments, new items comprise the list of all items available in memory (minus the ones seen), and it's hard to see how such an exhaustive list could be effectively implemented and concrete predictions tested with simulations. - Model implementation should be better justified: for example, the stopping rule with n consecutive identical samples seems a bit arbitrary (at least it's hard to imagine neural/behavioral parallels for that) and sensitivity with regard to n is not discussed. - Finally it's unclear how perceptual modifications apply for the case of recall: in my understanding the items are freely recalled from memory and hence can't be perceptually modified. Also what are speeded/unspeeded conditions?
- Most importantly, the explanations are very qualitative and whenever simulation or experiment-based evidence is given, the procedures are described very minimally or not at all, and some figures are confusing, e.g. what is "sample count" in fig. 2? It would really help adding more details to the paper and/or supplementary information in order to appreciate what exactly was done in each simulation. Whenever statistical inferences are made, there should be error bars and/or p-values.
FGBEoz9WzI
EMNLP_2023
1. Some claims may be inspired from existing studies; thus, it is critical to add the supportive references. For example, Lines 55-64: "we identify four critical factors that affect the performance of chain-of-thought prompting and require large human effort to deal with: (1) order sensitivity: the order combination of the exemplars; (2) complexity: the number of reasoning steps of the rationale chains; (3) diversity: the combination of different complex-level exemplars; (4) style sensitivity: the writing/linguistic style of the rationale chains." --- Most of the above factors have been discussed in existing studies. 2. This approach requires extensive queries to optimize and organize the demonstration exemplars, which would costly behind the paywalls. It also relies on a training-based pipeline, which further increases the complexity of the whole framework.
1. Some claims may be inspired from existing studies; thus, it is critical to add the supportive references. For example, Lines 55-64: "we identify four critical factors that affect the performance of chain-of-thought prompting and require large human effort to deal with: (1) order sensitivity: the order combination of the exemplars; (2) complexity: the number of reasoning steps of the rationale chains; (3) diversity: the combination of different complex-level exemplars; (4) style sensitivity: the writing/linguistic style of the rationale chains." --- Most of the above factors have been discussed in existing studies.
NIPS_2016_192
NIPS_2016
Weakness: (e.g., why I am recommending poster, and not oral) - Impact: This paper makes it easier to train models using learning to search, but it doesn't really advance state-of-the-art in terms of the kind of models we can build. - Impact: This paper could be improved by explicitly showing the settings for the various knobs of this algorithm to mimic prior work: Dagger, searn, etc...it would help the community by providing a single review of the various advances in this area. - (Minor issue) What's up with Figure 3? "OAA" is never referenced in the body text. It looks like there's more content in the appendix that is missing here, or the caption is out of date.
- Impact: This paper could be improved by explicitly showing the settings for the various knobs of this algorithm to mimic prior work: Dagger, searn, etc...it would help the community by providing a single review of the various advances in this area.
NIPS_2018_612
NIPS_2018
Weakness: - Two types of methods are mixed into a single package (CatBoost) and evaluation experiments, and the contribution of each trick would be a bit unclear. In particular, it would be unclear whether CatBoost is basically for categorical data or it would also work with the numerical data only. - The bias under discussion is basically the ones occurred at each step, and their impact to the total ensemble is unclear. For example, randomization as seen in Friedman's stochastic gradient boosting can work for debiasing/stabilizing this type of overfitting biases. - The examples of Theorem 1 and the biases of TS are too specific and it is not convincing how these statement can be practical issues in general. Comment: - The main unclear point to me is whether CatBoost is mainly for categorical features or not. If the section 3 and 4 are independent, then it would be informative to separately evaluate the contribution of each trick. - Another unclear point is the paper presents specific examples of biases of target statistics (section 3.2) and prediction shift of gradient values (Theorem 1), and we can know that the bias can happen, but on the other hand, we are not sure how general these situations are. - One important thing I'm also interested in is that the latter bias 'prediction shift' is caused at each step, and its effect on the entire ensemble is not clear. For example, I guess the effect of the presented 'ordered boosting' could be related to Friedman's stochastic gradient boosting cited as [13]. This simple trick is just apply bagging to each gradient-computing step of gradient boosting, which would randomly perturb the exact computation of gradient. Each step would be just randomly biased, but the entire ensemble would be expected to be stabilized as a whole. Both XGBoost and LightGBM have this stochastic/bagging option, we can use it when we need it. Comment After Author Response: Thank you for the response. I appreciate the great engineering effort to realize a nice & high-performance implementations of CatBoost. But I'm still not sure that how 'ordering boosting', one of two main ideas of the paper, gives the performance improvement in general. As I mentioned in the previous comment, the bias occurs at each base learner h_t. But it is unclear that how this affects the entire ensemble F_t that we actually use. Since each h_t is a "weak" learner anyway, any small biases can be corrected to some extent through the entire boosting process. I couldn't find any comments for this point in the response. I understand the nice empirical results of Tab. 3 (Ordered vs. Plain gradient values) and Tab. 4 (Ordered TS vs. alternative TS methods). But I'm still unsure whether this improvement comes only from the 'ordering' ideas to address two types of target leakages. Because the comparing models have many different hyper parameters and (some of?) these are tuned by Hyperopt, so the improvement can come not only from addressing the two types of leakage. For example, it would be nice to have something like the following comparisons o focus only on two ideas of ordered TS and ordered boosting in addition: 1) Hyperopt-best-tuned comparisons of CatBoost (plain) vs LightGBM vs XGboost (to make sure no advantages exists for CatBoost (plain) ) 2) Hyperopt-best-tuned comparisons of CatBoost without column sampling + row sampling vs LightGBM/XGBoost without column sampling + row sampling 3) Hyperopt-best-tuned comparisons of CatBoost(plain) + ordered TS without ordered boosting vs CatBoost(plain) (any other randomization options, column sampling and row sampling, should be off) 4) Hyperopt-best-tuned comparisons of CatBoost(plain) + ordered boosting without ordered TS vs CatBoost(plain) (any other randomization options, column sampling and row sampling, should be off)
- Another unclear point is the paper presents specific examples of biases of target statistics (section 3.2) and prediction shift of gradient values (Theorem 1), and we can know that the bias can happen, but on the other hand, we are not sure how general these situations are.
README.md exists but content is empty.
Downloads last month
32