paper_id (stringlengths 10-19) | venue (stringclasses, 15 values) | focused_review (stringlengths 192-10.2k) | point (stringlengths 23-618) |
---|---|---|---|
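Each data row below pairs a focused_review with a single extracted point, and each point appears verbatim inside its review. As a rough illustration of the schema, here is a minimal Python sketch for representing and sanity-checking such rows; the class and helper names are made up for this example and are not part of any official loader.

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class ReviewPoint:
    paper_id: str        # e.g. "NIPS_2019_772"
    venue: str           # one of the 15 venue classes, e.g. "NIPS_2019"
    focused_review: str  # full review text (roughly 192 to 10.2k characters)
    point: str           # single extracted point (roughly 23 to 618 characters)

def misaligned_rows(rows: Iterable[ReviewPoint]) -> List[str]:
    """Return paper_ids whose point is not found verbatim inside its focused_review."""
    return [r.paper_id for r in rows if r.point.strip() not in r.focused_review]

# Example usage with a single hand-copied row:
# rows = [ReviewPoint("ICLR_2021_860", "ICLR_2021", "...full review...", "...one point...")]
# assert misaligned_rows(rows) == []
```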
NIPS_2019_772 | NIPS_2019 | of this approach (e.g., it does not take into account language compositionality). I appreciate that the authors used different methods to extract influential objects: Human attention (in line with previous works), text explanation (to rely on another modality), and question parsing (to remove the need for extra annotation). As a complementary analysis, I would have compared the object sets (Jaccard distance) extracted with visual cues and with text descriptions. Indeed, the VQA-X dataset contains both types of information for each question/answer pair. The method is more or less correctly explained. The training details seem complete and allow for reproducibility. The authors do not provide source code although they mentioned it in the reproducibility checklist. The empirical results are quite convincing and the necessary baselines and ablation studies are correctly provided. The formatting is simple and clear! It would have been perfect to provide error bars, as the number of experiments remains low (and over a small number of epochs). The cherry on the cake would be to run similar experiments on VQAv1 / VQA-CP1. To increase the impact of the paper, I would recommend extending the setting to either dense image captioning or question answering (if possible). I feel that the discussion section raises some excellent points: - I really like Table 4, which clearly shows that the method performs as expected (I would have added HINT to be exhaustive) - the ablation study is convincing However, a lot of questions are still left open and could have been discussed. For instance, I would have appreciated a more in-depth analysis of model errors. What about the model complexity? Why is only L_{crit} reweighted? How do L_{crit} and L_{infl} evolve at training time? On a more general note, I think the overall writing and paper architecture can be greatly improved. For instance, - the introduction and related work can be partially merged and summarized. - 4.2 starts by providing high-level intuition while 4.1 does not. - Training details incorporate some result discussion. Generic questions (sorted by impact): - What is the impact of |I|? Do you have the performance ratio according to the number of top-|I| influential objects? - Eq 1 is a modified version of GradCAM; however, the modifications are not highlighted (nor explained). For instance, why did the authors remove the ReLU? - Even if the weight sensitivity in equation 5 is well motivated, it is not supported by previous works. Thus, did you perform an ablation study? It would have been very nice to include in the discussion section. - What is the actual computation cost of the two losses? What is the relative additional time required? +5%, +20%, +200%? - As you used heuristics to retrieve influential objects, did you try to estimate the impact of false negatives in the loss? - How did you pick 0.6 for the GloVe embedding similarity? Did you perform k-fold cross-validation? What is the potential impact? - Have you tried other influential losses (Eq 3)? For instance, replacing the min with a mean or NDCG? Remarks: - I would use a different notation for SV(.,.,.) as it is not symmetric. For instance, SV_{a}(v_i || v_j) would avoid confusion (I am using KL notation here). - Non-formal expressions should be avoided, e.g., "l32 What's worse". - The references section is full of format inconsistencies. Besides, some papers are published in proceedings but are cited as arXiv papers. - 3.1 introduces unimportant notation, e.g., the functions h(.) and f(.), that are never used in the paper.
- Several subsections could be gathered together, or defined as paragraphs: 2.1/2.2/2.3 ; 5.1/5.2/5.3, etc. It would have saved space for more experiments. Conclusion: The paper introduces two losses to better tie influential objects and potential answers. The method is convincing, and the experimental results are diverse and good. However, I still think that the paper requires further polishing to improve the readability. I would also advocate providing more elements to support the proposed method and to analyze the strengths and weaknesses. Although the current experiments are quite convincing, I would advocate adding more analysis to definitively conclude on the efficiency of the method. ---------------------------------- The rebuttal was clearly written and insightful. It answered most of my questions, and the authors demonstrated their ability to update the paper accordingly. Therefore, I am happy to increase my score and accept the paper. | - I would use a different notation for SV(.,.,.) as it is not symmetric. For instance, SV_{a}(v_i || v_j) would avoid confusion (I am using KL notation here). - Non-formal expressions should be avoided, e.g., "l32 What's worse". - The references section is full of format inconsistencies. Besides, some papers are published in proceedings but are cited as arXiv papers. |
ICLR_2022_497 | ICLR_2022 | I have the following questions, to which I hope the authors can respond in the rebuttal. If I missed something in the paper, I would appreciate it if the authors could point it out.
Main concerns: - In my understanding, the best scenarios are those generated from the true distribution P (over the scenarios), and therefore, the CVAE essentially attempts to approximate the true distribution P. In such a sense, if the true distribution P is independent of the context (which is the case in the experiments in this paper), I do not see the rationale for having the scenarios conditioned on the context, which in theory does not provide any statistical evidence. Therefore, the rationale behind CVAE-SIP is not clear to me. If the goal is not to approximate P but to solve the optimization problem, then having the objective values involved as a prediction target is reasonable; in this case, having the context involved is justified because it can have an impact on the optimization results. Thus, CVAE-SIPA to me is a valid method. - While reducing the scenarios from 200 to 10 is promising, the quality of optimization has decreased a little bit. On the other hand, in Figure 2, using K-medoids with K=20 can perfectly recover the original value, which suggests that K-medoids is a decent solution and complex learning methods are not necessary for the considered settings. In addition, I am also wondering about the performance under the setting where the 200 scenarios (or a certain number of random scenarios from the true distribution) are directly used as the input of CPLEX. In addition, to justify the performance, it is necessary to provide information about robustness as well as to identify the cases where simple methods are not satisfactory (such as larger graphs).
Minor concerns: - Given the structure of the proposed CVAE, the generation process takes the input of z and c, where z is derived from w. This suggests that the proposed method requires us to know a collection of scenarios from the true distribution. If this is the case, it would be better to have a clear problem statement in Sec 3. Based on such an understanding, I am wondering about the process of generating the scenarios used for getting the K representatives - it would be great if code like Alg 1 were provided. - I would assume that the performance is closely related to the number of scenarios used for training, and therefore, it is interesting to examine the performance with different numbers of scenarios (which is fixed as 200 in the paper). - The structure of the encoder is not clear to me. The notation q_{\phi} is used to denote two different functions, q(z|w, D) and q(c|D). Does that mean they are the same network? - It would be better to experimentally justify the choice of the dimensions of c and z. - It looks to me that the proposed methods are designed for graph-based problems, while two-stage integer programming does not have to involve graph problems in general. If this is the case, it would be better to clearly indicate the scope of the considered problem. Before reaching Sec 4.2, I was thinking that the paper could address general settings. - The paper introduces CVAE-SIP and CVAE-SIPA in Sec 5 -- after discussing the training methods, so I am wondering if they follow the same training scheme. In particular, it is not clear to me what is meant by "append objective values to the representations" at the beginning of Sec 5. - The approximation error is defined as the gap between the objective values, which is somehow ambiguous unless one has seen the values in the table. It would be better to provide a mathematical characterization. | - The paper introduces CVAE-SIP and CVAE-SIPA in Sec 5 -- after discussing the training methods, so I am wondering if they follow the same training scheme. In particular, it is not clear to me what is meant by "append objective values to the representations" at the beginning of Sec 5. |
m50eKHCttz | ICLR_2024 | Overall, I think the paper is quite comprehensive. A few points that may be lacking:
1. The results in studying properties of student models are a bit surprising to me. This isn't a huge weakness, but more exploration of why CNN student models improve with scale and why transformer student models seem to worsen with scale would strengthen these results.
2. The data partitioning heuristic is reasonable, but some ablations on this approach would be more enlightening. Perhaps in some instances, the student model may be overconfident about particular data points (that either have incorrect labels or are inherently difficult examples to classify), and this data partitioning approach would maintain this overconfidence. | 1. The results in studying properties of student models are a bit surprising to me. This isn't a huge weakness, but more exploration of why CNN student models improve with scale and why transformer student models seem to worsen with scale would strengthen these results. |
NIPS_2017_631 | NIPS_2017 | - I don't understand why Section 2.1 is included. Batch Normalization is a general technique as is the proposed Conditional Batch Normalization (CBN). The description of the proposed methodology seems independent of the choice of model and the time spent describing the ResNet architecture could be better used to provide greater motivation and intuition for the proposed CBN approach.
- On that note, I understand the neurological motivation for why early vision may benefit from language modulation, but the argument for why this should be done through the normalization parameters is less well argued (especially in Section 3). The intro mentions the proposed approach reduces over-fitting compared to fine-tuning but doesn't discuss CBN in the context of alternative early-fusion strategies.
- As CBN is a general method, I would have been more convinced by improvements in performance across multiple model architectures for vision + language tasks. For instance, CBN seems directly applicable to the MCB architecture. I acknowledge that needing to backprop through the CNN causes memory concerns which might be limiting.
- Given the argument for early modulation of vision, it is a bit surprising that applying CBN to Stage 4 (the highest level stage) accounts for the majority of the improvement in both the VQA and GuessWhat tasks. Some added discussion in this section might be useful. The supplementary figures are also interesting, showing that question-conditioned separations in image space only occur after later stages.
- Figures 2 and 3 seem somewhat redundant.
Minor things:
- I would have liked to see how different questions change the feature representation of a single image. Perhaps by applying some gradient visualization method to the visual features when changing the question?
- Consider adding a space before citation brackets.
- Bolding of the baseline models is inconsistent.
- Eq 2 has a gamma_j rather than gamma_c
L34 'to let the question to attend' -> 'to let the question attend'
L42 missing citation
L53 first discussion of batch norm missing citation
L58 "to which we refer as" -> "which we refer to as"
L89 "is achieved a" -> "is achieved through a" | - On that note, I understand the neurological motivation for why early vision may benefit from language modulation, but the argument for why this should be done through the normalization parameters is less well argued (especially in Section 3). The intro mentions the proposed approach reduces over-fitting compared to fine-tuning but doesn't discuss CBN in the context of alternative early-fusion strategies. |
ICLR_2021_860 | ICLR_2021 | Weaknesses: 1. The proposed measurement is not helpful for designing new methods. Note that the mutual information in mixup is lower than the baseline's while mixup still outperforms the baseline. 2. Compared to mixup and cutmix, the improvement reported in Table 2 is marginal. 3. The experiments on ImageNet are unconvincing. Both mixup and cutmix are worse than the baseline, which contradicts the existing results. 4. There is no discussion of saliency-based mixup methods, e.g., Puzzle Mix [1]. It is closely related to fmix but equipped with a learnable strategy to obtain patches for mixing.
[1] J-H Kim, et al. Puzzle mix: Exploiting saliency and local statistics for optimal mixup | 2. Compared to mixup and cutmix, the improvement reported in Table 2 is marginal. |
MzDakXdBbM | EMNLP_2023 | This work presents some weaknesses regarding the information provided:
- There is no study around LLM-based data sampling, where Chat-GPT or PaLM-2 detects which queries should be rewritten. There is no measurement of the performance of those models for that classification task.
- There is no data on the nature and quality of the generated queries.
- It is unclear whether the methodology posed is fixing (maintaining all samples and fixing some of them) or filtering and re-generating a subset of the original dataset.
- The size of the generated datasets (CQSumDP, PQSumDP) is unclear. Thus, the results presented could also be unclear.
- Superficial data results analysis.
Also, a big concern about the methodology: re-generating the queries with an LLM that simultaneously sees the document and the summary could generate queries excessively biased towards the reference summaries that could help the summarization models. | - There is no study around LLM-based data sampling, where Chat-GPT or PaLM-2 detects which queries should be rewritten. There is no measurement of the performance of those models for that classification task. |
NIPS_2016_370 | NIPS_2016 | , and while the scores above are my best attempt to turn these strengths and weaknesses into numerical judgments, I think it's important to consider the strengths and weaknesses holistically when making a judgment. Below are my impressions. First, the strengths: 1. The idea to perform improper unsupervised learning is an interesting one, which allows one to circumvent certain NP hardness results in the unsupervised learning setting. 2. The results, while mostly based on "standard" techniques, are not obvious a priori, and require a fair degree of technical competency (i.e., the techniques are really only "standard" to a small group of experts). 3. The paper is locally well-written and the technical presentation flows easily: I can understand the statement of each theorem without having to wade through too much notation, and the authors do a good job of conveying the gist of the proofs. Second, the weaknesses: 1. The biggest weakness is some issues with the framework itself. In particular: 1a. It is not obvious that "k-bit representation" is the right notion for unsupervised learning. Presumably the idea is that if one can compress to a small number of bits, one will obtain good generalization performance from a small number of labeled samples. But in reality, this will also depend on the chosen model class used to fit this hypothetical supervised data: perhaps there is one representation which admits a linear model, while another requires a quadratic model or a kernel. It seems more desirable to have a linear model on 10,000 bits than a quadratic model on 1,000 bits. This is an issue that I felt was brushed under the rug in an otherwise clear paper. 1b. It also seems a bit clunky to work with bits (in fact, the paper basically immediately passes from bits to real numbers). 1c. Somewhat related to 1a, it wasn't obvious to me if the representations implicit in the main results would actually lead to good performance if the resulting features were then used in supervised learning. I generally felt that it would be better if the framework was (a) more tied to eventual supervised learning performance, and (b) a bit simpler to work with. 2. I thought that the introduction was a bit grandiose in comparing itself to PAC learning. 3. The main point (that improper unsupervised learning can overcome NP hardness barriers) didn't come through until I had read the paper in detail. When deciding what papers to accept into a conference, there are inevitably cases where one must decide between conservatively accepting only papers that are clearly solid, and taking risks to allow more original but higher-variance papers to reach a wide audience. I generally favor the latter approach, I think this paper is a case in point: it's hard for me to tell whether the ideas in this paper will ultimately lead to a fruitful line of work, or turn out to be flawed in the end. So the variance is high, but the expected value is high as well, and I generally get the sense from reading the paper that the authors know what they are doing. So I think it should be accepted. Some questions for the authors (please answer in rebuttal): -Do the representations implicit in Theorems 3.2 and Theorem 4.1 yield features that would be appropriate for subsequent supervised learning of a linear model (i.e., would linear combinations of the features yield a reasonable model family)? -How easy is it to handle e.g. manifolds defined by cubic constraints with the spectral decoding approach? | 1. 
The idea to perform improper unsupervised learning is an interesting one, which allows one to circumvent certain NP hardness results in the unsupervised learning setting. |
NIPS_2020_989 | NIPS_2020 | 1. The notations, equations in the method section are not clear. In Line 110 for instance, the equation $\Upsilon(x)=\{\Upsilon(x)_l\}$ is confusing. 2. The discriminator on the left side of Figure 1 is not the network used by the existing I2I methods (e.g., BicycleGAN concatenates the one-hot vector with the image as the input.) 3. Two highly-related frameworks targeting multi-domain I2I [1,2] are not cited, discussed, and compared in the paper. 4. In the table of Figure 3, it is not clear why training with partial adaptor performs worse than that of training without the adaptor? 5. Since the model is pre-trained from the BigGAN model trained on the natural images, what is the performance of the proposed method on the I2I tasks with unnatural images (e.g., face to artistic portrait)? [1] Lee, et al. "Drit++: Diverse image-to-image translation via disentangled representations.". [2] Choi et al. "StarGAN v2: Diverse image synthesis for multiple domains." | 2. The discriminator on the left side of Figure 1 is not the network used by the existing I2I methods (e.g., BicycleGAN concatenates the one-hot vector with the image as the input.) 3. Two highly-related frameworks targeting multi-domain I2I [1,2] are not cited, discussed, and compared in the paper. |
ARR_2022_233_review | ARR_2022 | Additional details regarding the creation of the dataset would be helpful to resolve some doubts regarding its robustness. It is not stated whether the dataset will be publicly released.
1) Additional reference regarding explainable NLP datasets: "Detecting and explaining unfairness in consumer contracts through memory networks" (Ruggeri et al., 2021) 2) Some aspects of the creation of the dataset are unclear and the authors must address them. First of all, will the authors release the dataset or will it remain private?
Are the guidelines used to train the annotators publicly available?
Having a single person responsible for the check at the end of the first round may introduce biases. A better practice would be to have more than one checker for each problem, at least on a subset of the corpus, to measure the agreement between them and, in case of need, adjust the guidelines.
It is not clear how many problems are examined during the second round and the agreement between the authors is not reported.
It is not clear what is meant by "accuracy" during the annotation stages.
3) Additional metrics that may be used to evaluate text generation: METEOR (http://dx.doi.org/10.3115/v1/W14-3348), SIM(ile) (http://dx.doi.org/10.18653/v1/P19-1427).
4) Why have the authors decided to use the colon symbol rather than a more original and less common symbol? Since the colon has usually a different meaning in natural language, do they think it may have an impact?
5) How much are these problems language-dependent? Meaning, if these problems were perfectly translated into another language, would they remain valid? What about the R4 category? Additional comments about these aspects would be beneficial for future works, cross-lingual transfers, and multi-lingual settings.
6) In Table 3, it is not clear whether the line with +epsilon refers to the human performance when the gold explanation is available or to the RoBERTa performance when the gold explanation is available.
In any case, both of these settings would be interesting to know, so I suggest including them in the comparison if possible.
7) The explanation that must be generated for the query, the correct answer, and the incorrect answers could be slightly different. Indeed, if I am not making a mistake, the explanation for the incorrect answer must highlight the differences w.r.t. the query, while the explanation for the answer must highlight the similarity. It would be interesting to analyze these three categories separately and see whether there are differences in the models' performances. | 6) In Table 3, it is not clear whether the line with +epsilon refers to the human performance when the gold explanation is available or to the RoBERTa performance when the gold explanation is available. In any case, both of these settings would be interesting to know, so I suggest including them in the comparison if possible. |
CkrqCY0GhW | ICLR_2024 | 1. The reviewer did not get why Section 4 is needed (with such a large space), since most of it introduces baseline methods. Also, I did not understand why RCI/AdaPlanner/Synapse are used as baselines.
2. Only test on 50 compositional web automation tasks. Are the methods and evaluations/insights generalizable to other tasks?
3. A lot of details are shown in the appendix (e.g., task difficulty estimation and data balancing method). | 2. Only test on 50 compositional web automation tasks. Are the methods and evaluations/insights generalizable to other tasks? |
r2nwBwodth | ICLR_2025 | 1. The model proposed in this paper is an adaptation of the MAE model. The masking mechanism draws inspiration from wav2vec and data2vec, aiming to reconstruct the statistical information of the input signal. However, these contributions seem limited in scope.
2. The experimental results of this model are only comparable to data2vec, which was published two years ago, suggesting that its performance may not surpass more recent methods in the field. | 2. The experimental results of this model are only comparable to data2vec, which was published two years ago, suggesting that its performance may not surpass more recent methods in the field. |
BMIjPXooNq | EMNLP_2023 | - The paper only studies one split from each of two synthetic datasets. It’s hard to know whether the conclusions can be translated to other splits and datasets.
- The effectiveness of leveraging dataset cartography for CL is unclear. In most cases, no curriculum appears to perform better or is on par with a strategy that starts the curriculum with hard-to-learn samples. | - The paper only studies one split from each of two synthetic datasets. It’s hard to know whether the conclusions can be translated to other splits and datasets. |
ACL_2017_96_review | ACL_2017 | - lack of statistics of the datasets (e.g. average length, vocabulary size) - the baseline (Moses) is not proper because of the small size of the dataset - the assumption "sarcastic tweets often differ from their non sarcastic interpretations in as little as one sentiment word" is not supported by the data. - General Discussion: This discussion gives more details about the weaknesses of the paper. Half of the paper is about the new dataset for sarcasm interpretation.
However, the paper doesn't show important information about the dataset such as average length and vocabulary size. More importantly, the paper doesn't show any statistical evidence to support their method of focusing on sentiment words. Because the dataset is small (only 3000 tweets), I guess that many words are rare. Therefore, Moses alone is not a proper baseline. A proper baseline should be an MT system that can handle rare words very well. In fact, using clustering and declustering (as in Sarcasm SIGN) is a way to handle rare words.
Sarcasm SIGN is built based on the assumption that "sarcastic tweets often differ from their non sarcastic interpretations in as little as one sentiment word". Table 1 however strongly disagrees with this assumption: the human interpretations often differ from the tweets in more than just sentiment words. I thus strongly suggest that the authors give statistical evidence from the dataset that supports their assumption. Otherwise, the whole idea of Sarcasm SIGN is just a hack.
-------------------------------------------------------------- I have read the authors' response. I don't change my decision for the following reasons: - the authors wrote that "the Fiverr workers might not take this strategy": to me it is not the spirit of corpus-based NLP. A model must be built to fit given data, not that the data must follow some assumption that the model is built on.
- the authors wrote that "the BLEU scores of Moses and SIGN are above 60, which is generally considered decent in the MT literature": to me the number 60 doesn't show anything at all because the sentences in the dataset are very short. Moreover, if we look at Table 6, the %changed of Moses is only 42%, meaning that even though more than half of the time the translation is simply copying, the BLEU score is more than 60.
- "While higher scores might be achieved with MT systems that explicitly address rare words, these systems don't focus on sentiment words": it's true, but I was wondering whether sentiment words are rare in the corpus. If they are, those MT systems should obviously handle them (in addition to other rare words). | - the authors wrote that "the Fiverr workers might not take this strategy": to me it is not the spirit of corpus-based NLP. A model must be built to fit given data, not that the data must follow some assumption that the model is built on. |
ICLR_2021_2110 | ICLR_2021 | 1). My first concern is about the unrealistic assumptions. For example, the Eq (5) "channel condition" requires g1 * g2 = C2, which doesn't make sense to me: there is no intuition, and most existing convs don't satisfy this assumption: (1) regular conv g1=g2=1 != C2 doesn't satisfy this; (2) spatial separable conv g1=g2=1 != C2. This assumption is critical to arrive at equations (7) and (8), but it is unclear where this assumption comes from. Due to these unrealistic assumptions, the term "optimal" is also questionable.
2). Second, the CIFAR results show the new layers are not much better than others. As shown in Figure 3, the largest gain is <1%, and sometimes the o-ResNet (~88%) is slightly worse than d-ResNet (which indicates the proposed layers might not be "optimal"?). The improvements on ImageNet in Table 4 seem to be promising, but as discussed in DARTS+ and other recent works, the search process of DARTS is often unstable and could potentially have high variance.
3). Another main concern is about the weak baseline. As this paper studies separable convs, it should compare to separable-conv-based models like MobileNet/FBNet/EfficientNet, rather than the full-conv-based ResNet. For example, by leveraging depthwise and separable convs, MobileNetV3 achieves 75.2% ImageNet top-1 accuracy with 219M FLOPs, which is a much stronger baseline than the one in Table 4. I highly recommend that the authors conduct their experiments on these baselines.
======== Suggestions
1). Instead of formulating it as a mathematically optimal solution based on unrealistic assumptions, I recommend that the authors conduct more empirical studies on these design choices. For example, the paper only shows the performance results of the "optimal" (g1, g2) computed by equation (7), but it would be helpful to show the performance for different (g1, g2) values, and compare them with the "optimal" (g1, g2).
2). I recommend that the authors use the latest MobileNet or EfficientNet (or other separable-conv-based models) as baselines, replace their separable convs with the proposed "optimal separable convs", and compare the performance gains. | 2). Second, the CIFAR results show the new layers are not much better than others. As shown in Figure 3, the largest gain is <1%, and sometimes the o-ResNet (~88%) is slightly worse than d-ResNet (which indicates the proposed layers might not be "optimal"?). The improvements on ImageNet in Table 4 seem to be promising, but as discussed in DARTS+ and other recent works, the search process of DARTS is often unstable and could potentially have high variance. |
ICLR_2022_1791 | ICLR_2022 | Weakness:
The whole framework is built upon an assumption that the video can be (near) perfectly decomposed into foreground objects and background, which is a very toy assumption that does not hold for any complicated real video data.
This paper assumes knowledge of the underlying physics dynamics (in this case, a pendulum), which is an unreasonable assumption. Other dynamics, if they exist in the video, cannot be modeled.
The experiments are very weak. 1) only one physics dynamics model of the pendulum is shown; 2) For the pendulum, only one real video is evaluated; 3) the other experiments are done on synthetically generated data, which are also very weak.
What if there is more than one pendulum in the video? What about viewing the pendulum from another viewpoint such that the motion pattern is not a perfect "swing"? These are not shown in the paper at all. | 2) For the pendulum, only one real video is evaluated; |
NIPS_2022_655 | NIPS_2022 | Weakness: 1. The conclusion seems to be only for GCN. I wonder whether GAT [1] may exhibit a smaller degree bias, even smaller than graph contrastive learning methods. 2. From Figure 6 in Appendix A, the advantage of graph contrastive learning methods over GCN on the Photo dataset is not obvious. The numerical values of their slopes are close. 3. There is a small gap between degree bias and the theoretical analysis of clear community structure. 4. The improvement of the proposed method in Table 1 does not seem statistically significant because of high variance. 5. There are some related works designed for degree bias, such as SL-DSGCN [2]. But these methods are not set as baselines in the experimental comparison.
[1] Veličković P, Cucurull G, Casanova A, et al. Graph Attention Networks[C]//International Conference on Learning Representations. 2018. [2] Tang X, Yao H, Sun Y, et al. Investigating and mitigating degree-related biases in graph convoltuional networks[C]//Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2020: 1435-1444.
In addition to the limitations mentioned in the paper, the generalization of the conclusion should be taken into consideration. | 1. The conclusion seems to be only for GCN. I wonder whether GAT [1] may exhibit a smaller degree bias, even smaller than graph contrastive learning methods. |
NIPS_2017_351 | NIPS_2017 | - As I said above, I found the writing / presentation a bit jumbled at times.
- The novelty here feels a bit limited. Undoubtedly the architecture is more complex than and outperforms the MCB for VQA model [7], but much of this added complexity is simply repeating the intuition of [7] at higher (trinary) and lower (unary) orders. I don't think this is a huge problem, but I would suggest the authors clarify these contributions (and any I may have missed).
- I don't think the probabilistic connection is drawn very well. It doesn't seem to be made formally enough to take it as anything more than motivational which is fine, but I would suggest the authors either cement this connection more formally or adjust the language to clarify.
- Figure 2 is at an odd level of abstraction where it is not detailed enough to understand the network's functionality but also not abstract enough to easily capture the outline of the approach. I would suggest trying to simplify this figure to emphasize the unary/pairwise/trinary potential generation more clearly.
- Figure 3 is never referenced unless I missed it.
Some things I'm curious about:
- What values were learned for the linear coefficients for combining the marginalized potentials in equations (1)? It would be interesting if different modalities took advantage of different potential orders.
- I find it interesting that the 2-Modalities Unary+Pairwise model under-performs MCB [7] despite such a similar architecture. I was disappointed that there was not much discussion about this in the text. Any intuition into this result? Is it related to swap to the MCB / MCT decision computation modules?
- The discussion of using sequential MCB vs a single MCT layers for the decision head was quite interesting, but no results were shown. Could the authors speak a bit about what was observed? | - What values were learned for the linear coefficients for combining the marginalized potentials in equations (1)? It would be interesting if different modalities took advantage of different potential orders. |
NIPS_2017_71 | NIPS_2017 | - The paper is a bit incremental. Basically, knowledge distillation is applied to object detection (as opposed to classification as in the original paper).
- Table 4 is incomplete. It should include the results for all four datasets.
- In the related work section, the class of binary networks is missing. These networks are also efficient and compact. Example papers are:
* XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, ECCV 2016
* Binaryconnect: Training deep neural networks with binary weights during propagations, NIPS 2015
Overall assessment: The idea of the paper is interesting. The experiment section is solid. Hence, I recommend acceptance of the paper. | - The paper is a bit incremental. Basically, knowledge distillation is applied to object detection (as opposed to classification as in the original paper). |
NIPS_2017_631 | NIPS_2017 | - I don't understand why Section 2.1 is included. Batch Normalization is a general technique as is the proposed Conditional Batch Normalization (CBN). The description of the proposed methodology seems independent of the choice of model and the time spent describing the ResNet architecture could be better used to provide greater motivation and intuition for the proposed CBN approach.
- On that note, I understand the neurological motivation for why early vision may benefit from language modulation, but the argument for why this should be done through the normalization parameters is less well argued (especially in Section 3). The intro mentions the proposed approach reduces over-fitting compared to fine-tuning but doesn't discuss CBN in the context of alternative early-fusion strategies.
- As CBN is a general method, I would have been more convinced by improvements in performance across multiple model architectures for vision + language tasks. For instance, CBN seems directly applicable to the MCB architecture. I acknowledge that needing to backprop through the CNN causes memory concerns which might be limiting.
- Given the argument for early modulation of vision, it is a bit surprising that applying CBN to Stage 4 (the highest level stage) accounts for the majority of the improvement in both the VQA and GuessWhat tasks. Some added discussion in this section might be useful. The supplementary figures are also interesting, showing that question-conditioned separations in image space only occur after later stages.
- Figures 2 and 3 seem somewhat redundant.
Minor things:
- I would have liked to see how different questions change the feature representation of a single image. Perhaps by applying some gradient visualization method to the visual features when changing the question?
- Consider adding a space before citation brackets.
- Bolding of the baseline models is inconsistent.
- Eq 2 has a gamma_j rather than gamma_c
L34 'to let the question to attend' -> 'to let the question attend'
L42 missing citation
L53 first discussion of batch norm missing citation
L58 "to which we refer as" -> "which we refer to as"
L89 "is achieved a" -> "is achieved through a" | - I would have liked to see how different questions change the feature representation of a single image. Perhaps by applying some gradient visualization method to the visual features when changing the question? |
fjf3YenThE | ICLR_2024 | 1. The paper lacks a proper related work section, which makes it challenging for readers to quickly grasp the background and understand the previous works. It is crucial to include a comprehensive discussion on related works, especially regarding the variance-reduced ZO hard-thresholding algorithm and the variance reduction aspect.
2. The paper suffers from a lack of necessary references, such as papers on SAGA, SARAH, and SVRG methods. When these methods are initially mentioned, it is essential to provide corresponding references. Additionally, there are errors in the appendix due to bibtex errors, which should be carefully reviewed and corrected.
3. The presentation of baselines and experimental settings in the main text is not well-organized. It is recommended to reorganize this information to improve clarity, especially for readers who are unfamiliar with the baselines and adversarial attacks. Providing a cross-reference to the appendix can also help readers gain a better understanding.
4. The introduction of SAGA-SZHT is missing from the paper, and it cannot be found. It is necessary to either locate the missing information or add it during the rebuttal phase.
5. The authors propose three variants of VR-SZHT by utilizing SVRG, SARAH, and SAGA. It would be beneficial to summarize the advantages of each method in terms of memory storage and convergence rate, similar to what is found in the variance-reduction literature. Providing tables or summaries can help readers compare and understand the individual strengths of these methods.
6. It is well-known that variance-reduction methods can improve the convergence rate of SGD from sublinear to linear under strongly convex and smoothness conditions. It would be interesting to clarify whether VR-SZHT exhibits a similar improvement compared to SZOHT. If there are notable differences, the authors should provide explanations or insights into the reasons behind these variations.
7. I am curious to know if there are any additional technical challenges when integrating VR methods into SZOHT and proving the convergence rate, compared to applying VR methods to traditional finite-sum tasks. The response to this question will not impact my final evaluation of the paper's novelty. However, it will help me gain a better understanding of the paper's correctness and soundness. | 2. The paper suffers from a lack of necessary references, such as papers on SAGA, SARAH, and SVRG methods. When these methods are initially mentioned, it is essential to provide corresponding references. Additionally, there are errors in the appendix due to bibtex errors, which should be carefully reviewed and corrected. |
NIPS_2017_114 | NIPS_2017 | - More evaluation would have been welcome, especially on CIFAR-10 in the full label and lower label scenarios.
- The CIFAR-10 results are a little disappointing with respect to temporal ensembles (although the results are comparable and the proposed approach has other advantages)
- An evaluation on the more challenging STL-10 dataset would have been welcome. Comments
- The SVHN evaluation suggests that the model is better than pi and temporal ensembling especially in the low-label scenario. With this in mind, it would have been nice to see if you can confirm this on CIFAR-10 too (i.e. show results on CIFAR-10 with fewer labels)
- I would have liked to have seen what the CIFAR-10 performance looks like with all labels included.
- It would be good to include in the left graph in fig 3 the learning curve for a model without any mean teacher or pi regularization for comparison, to see if mean teacher accelerates learning or slows it down.
- I'd be interested to see if the exponential moving average of the weights provides any benefit on its own, without the additional consistency cost. | - I'd be interested to see if the exponential moving average of the weights provides any benefit on its own, without the additional consistency cost. |
3Jl0sjmZx9 | ICLR_2024 | 1. It is not clear how to generate the I, C and R', which is critical in this paper. Also, I'm not sure whether the quality of the data generated by the unified pipeline is good or not, though the authors mention that there are professionals who help check them.
2. The comparison in Table 3 shows the advantage of the proposed method in this paper, which is mainly due to the domain-specific encoder. However, will the computational complexity be much larger?
3. The dataset (MIMIC-R3G) is not open-sourced, and there is no mention of open-sourcing it in the future. | 2. The comparison in Table 3 shows the advantage of the proposed method in this paper, which is mainly due to the domain-specific encoder. However, will the computational complexity be much larger? |
YPvI7SofeZ | ICLR_2025 | - A lot of use of quite specific jargon, which makes it harder to follow an otherwise very clearly written paper.
- The section on multi-step RL is very short and, by only referencing results in the appendix, not self-contained material of this paper. The introduction (L 84) claims an "investigation to address larger levels of system noise", which is only pointed to / contained in the appendix.
- RL 'framework' is a very broad claim; a physical reward shaping with (as I understand) out-of-bound-step constraints may be effective as an application to quantum state control, but is - in the context of RL and QRL - not very novel. ---
Minor Notes:
- Fig 1. Axis labels should be bigger. Split positioning of legend is a bit confusing.
- The terms state populations, rise time etc. could be better explained.
- L163 Typo Two dots after etc.
- L 202 Typo quantu[m] system, see [Section] 3.2.1)
- L 350 Typo That the learned pulses [are] physically
- L 350 Typo e.g.[,]
- Inconsistent use of references (eq / equations), (Fig. / Figure), (App. / Appendix)
- Inconsistent use of cf. inline and (cf. in brackets) | - Fig 1. Axis labels should be bigger. Split positioning of legend is a bit confusing. |
NIPS_2018_612 | NIPS_2018 | Weakness: - Two types of methods are mixed into a single package (CatBoost) and evaluation experiments, and the contribution of each trick would be a bit unclear. In particular, it would be unclear whether CatBoost is basically for categorical data or whether it would also work with numerical data only. - The bias under discussion is basically the one that occurs at each step, and its impact on the total ensemble is unclear. For example, randomization as seen in Friedman's stochastic gradient boosting can work for debiasing/stabilizing this type of overfitting bias. - The examples of Theorem 1 and the biases of TS are too specific and it is not convincing how these statements can be practical issues in general. Comment: - The main unclear point to me is whether CatBoost is mainly for categorical features or not. If sections 3 and 4 are independent, then it would be informative to separately evaluate the contribution of each trick. - Another unclear point is that the paper presents specific examples of biases of target statistics (section 3.2) and prediction shift of gradient values (Theorem 1), and we can know that the bias can happen, but on the other hand, we are not sure how general these situations are. - One important thing I'm also interested in is that the latter bias 'prediction shift' is caused at each step, and its effect on the entire ensemble is not clear. For example, I guess the effect of the presented 'ordered boosting' could be related to Friedman's stochastic gradient boosting cited as [13]. This simple trick just applies bagging to each gradient-computing step of gradient boosting, which would randomly perturb the exact computation of the gradient. Each step would be just randomly biased, but the entire ensemble would be expected to be stabilized as a whole. Both XGBoost and LightGBM have this stochastic/bagging option, and we can use it when we need it. Comment After Author Response: Thank you for the response. I appreciate the great engineering effort to realize a nice & high-performance implementation of CatBoost. But I'm still not sure how 'ordered boosting', one of the two main ideas of the paper, gives the performance improvement in general. As I mentioned in the previous comment, the bias occurs at each base learner h_t. But it is unclear how this affects the entire ensemble F_t that we actually use. Since each h_t is a "weak" learner anyway, any small biases can be corrected to some extent through the entire boosting process. I couldn't find any comments on this point in the response. I understand the nice empirical results of Tab. 3 (Ordered vs. Plain gradient values) and Tab. 4 (Ordered TS vs. alternative TS methods). But I'm still unsure whether this improvement comes only from the 'ordering' ideas to address the two types of target leakage. Because the comparing models have many different hyperparameters and (some of?) these are tuned by Hyperopt, the improvement can come not only from addressing the two types of leakage. 
For example, it would be nice to have something like the following comparisons to focus only on the two ideas of ordered TS and ordered boosting in addition: 1) Hyperopt-best-tuned comparisons of CatBoost (plain) vs LightGBM vs XGBoost (to make sure no advantage exists for CatBoost (plain)) 2) Hyperopt-best-tuned comparisons of CatBoost without column sampling + row sampling vs LightGBM/XGBoost without column sampling + row sampling 3) Hyperopt-best-tuned comparisons of CatBoost (plain) + ordered TS without ordered boosting vs CatBoost (plain) (any other randomization options, column sampling and row sampling, should be off) 4) Hyperopt-best-tuned comparisons of CatBoost (plain) + ordered boosting without ordered TS vs CatBoost (plain) (any other randomization options, column sampling and row sampling, should be off) | - The bias under discussion is basically the one that occurs at each step, and its impact on the total ensemble is unclear. For example, randomization as seen in Friedman's stochastic gradient boosting can work for debiasing/stabilizing this type of overfitting bias. |
NIPS_2022_2513 | NIPS_2022 | Weakness:
1. The technical contribution of MicroSeg is very limited. Using a region proposal network for class-agnostic detection of novel objects is already a widely used idea, e.g., [a].
2. SSUL uses an off-the-shelf saliency-map detector to detect unseen classes, while the paper uses the pretrained Mask2Former to produce the region proposals. This may introduce an unfair comparison in terms of both data and model size. Mask2Former is additionally trained on COCO and has many more parameters than the off-the-shelf detector. What if SSUL also adopts Mask2Former to detect unseen classes? Or what if SSUL can generate the object proposals in an unsupervised way without lots of additional data or a heavy model, such as [b]?
3. Missing experiments on using other region proposal networks instead of Mask2Former, such as the RPN in Mask R-CNN. Will it influence the final model performance a lot?
4.Missing the speed comparison and model parameters with other methods. What are the model sizes of the proposed MicroSeg combined with Mask2Former?
[a] Gu, Xiuye, et al. "Open-vocabulary object detection via vision and language knowledge distillation. ICLR, 2022.
[b] Open-World Instance Segmentation: Exploiting Pseudo Ground Truth From Learned Pairwise Affinity. CVPR, 2022. | 4.Missing the speed comparison and model parameters with other methods. What are the model sizes of the proposed MicroSeg combined with Mask2Former? [a] Gu, Xiuye, et al. "Open-vocabulary object detection via vision and language knowledge distillation. ICLR, 2022. [b] Open-World Instance Segmentation: Exploiting Pseudo Ground Truth From Learned Pairwise Affinity. CVPR, 2022. |
ARR_2022_317_review | ARR_2022 | - Lack of novelty: - Adversarial attacks by perturbing text have been done on many NLP models and image-text models. This is nicely summarized in the related work of this paper. The only new effort is to take similar ideas and apply them to video-text models.
- Checklist (Ribeiro et al., ACL 2020) has shown many ways to stress test NLP models and evaluate them. Video-text models could also be tested on some of those dimensions, for instance on changing NER.
- If you could propose any type of perturbation which is specific to video-text models (and probably not that important to image-text or text-only models), it will be interesting to see. Otherwise, this work just looks like using an already existing method on this new problem (video-text), which is just coming up.
- Is there a way to take any clue from the video to create harder negatives? | - If you could propose any type of perturbation which is specific to video-text models (and probably not that important to image-text or text-only models), it will be interesting to see. Otherwise, this work just looks like using an already existing method on this new problem (video-text), which is just coming up. |
NIPS_2016_537 | NIPS_2016 | weakness of the paper is the lack of clarity in some of the presentation. Here are some examples of what I mean. 1) l 63, refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. 3) l 123, this is not the definition of "dominated" 4) for the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann which discusses the ability of universal kernels two separate an arbitrary finite data set with margin arbitrarily close to one. 5) an example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering. 6) in section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means. 7) in the same section, the notation {\cal P} with a subscript is used several times without being defined. 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers. | 1) l 63, refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. |
NIPS_2019_962 | NIPS_2019 | for exceptions. + Experiments are convincing. + To the best of my knowledge, the idea of using unsupervised keypoints for reinforcement learning is novel and promising. One can expect a variety of follow-up work. + Using keypoints as input state of a Q function is reasonable and reduces the dimensionality of the problem. + Reducing the search space to the most controllable keypoints instead of raw actions is a promising idea. Weaknesses: 1. Overstated claim on generalization In the introduction (L17-L22), the authors motivate their work by explaining that reinforcement learning approaches are limited because it is difficult to re-purpose task-specific representations, but that this is precisely what humans do. From this, one could have expected this paper to address this issue by training and applying the detector network across multiple games, re-purposing their keypoint detector. This would have be useful to verify that the learnt representations generalize to new contexts. But unfortunately, it hasn't been done, so it is a bit of an over-statement. Could this be a limitation of the method because the number of keypoints is fixed? 2. Deep RL that matters Experiments should be run multiple times. A longstanding issue with deep RL is their reproducibility and the significance of their improvements. It has been recently suggested that we need a community effort towards reproducibility [a], which should also be taken into account in this paper. Among the considerations, one critical thing is running multiple experiments and reporting the statistics. [a] Henderson, Peter, et al. "Deep reinforcement learning that matters." Thirty-Second AAAI Conference on Artificial Intelligence. 2018. 3. The choice of testing environment is not well motivated. Levels are selected without a clear rationale, with only a vague motivation in L167. This makes me suspect that they might be cherry picks. Authors should provide a more clear justification. This could be related to the next weakness that I will discuss, which is understandable. Even if this is the case, this should then be explicit with experimental evidence. 4. Keypoints are limited to moving objects A practical limitation comes from the fact that the keypoints are learnt from the moving parts of the image. As identified by the authors, the first resulting limitation is that the method assumes a fixed background, so that only meaningful objects move and can be detected as keypoints. Learning to detect keypoints based on what objects are moving has some limitations when these keypoints are supposed to be used as the input state of a Q function. One can imagine a game where some obstacles are immobile. The locations of these obstacles are important in order to make decisions but in this work, they would be ignored. It is therefore important that these limitations are also explicitly demonstrated. 5. Dealing with multiple instances. Because "PNet" generates one heatmap per keypoint, each keypoint detector "specializes" into a certain type of keypoint. This is fine for some applications (e.g. face keypoints) where only one instance of each kind of keypoint exists in each image. But there are games (e.g. Frostbite) where a lot of keypoints look exactly the same. And still, the detector is able to track them with consistency (as shown in the supplementary video). This is intriguing, as one could expect the detector to detect several keypoints at the same location, instead of distributing them almost perfectly. 
Is it because the receptive field is large? 6. Other issues - In section 3, the authors could improve the explanation of why the loss promotes the detection of meaningful keypoints. It is not obvious at first why the detector needs to detect keypoints to help with the reconstruction. - Figure 1: Referring to [15] as "PointNet" is confusing when this name doesn't appear anywhere in this paper ([15]) and there exists another paper with this name. See "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation", Charles R. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas. - Figure 1: The figure describes two "stop grad", but there is no mention or explanation of it in the text or caption. This is not theoretically motivated either, because most of the transported feature map comes from the source image (all the pixels that are not close from source or target keypoints). Blocking these gradients would block most of the gradients that can be used to train the "Feature CNN" and "PNet". - L93: "by marginalising the keypoint-detetor feature-maps along the image dimensions (as proposed in [15])". This would be better explained and self-contained by saying that a soft-argmax is used. - L189: "Distances above a threshold (ε) are excluded as potential matches". What threshold value is used? - What specific augmentation techniques are used during the training of the detector? - Figure 4: it is not clear what the meaning of "1-200 frames" is and how the values are computed. Why are the precision and recall changing with the trajectory length? Also, what is an "action repeat"? - Figure 6: the scores should be normalized (and maybe displayed as a plot) for easier comparison. ==== POST REBUTTAL ==== The rebuttal is quite convincing, and have addressed my concerns. I would like to raise the rating of the paper to 8 :-) I'm happy that my worries were just worries. | - L93: "by marginalising the keypoint-detetor feature-maps along the image dimensions (as proposed in [15])". This would be better explained and self-contained by saying that a soft-argmax is used. |
NIPS_2018_947 | NIPS_2018 | weakness of the paper, in its current version, is the experimental results. This is not to say that the proposed method is not promising - it definitely is. However, I have some questions that I hope the authors can address. - Time limit of 10 seconds: I am quite intrigued as to the particular choice of time limit, which seems really small. In comparison, when I look at the SMT Competition of 2017, specifically the QF_NIA division (http://smtcomp.sourceforge.net/2017/results-QF_NIA.shtml?v=1500632282), I find that all 5 solvers listed require 300-700 seconds. The same can be said about QF_BF and QF_NRA (links to results here http://smtcomp.sourceforge.net/2017/results-toc.shtml). While the learned model definitely improves over Z3 under the time limit of 10 seconds, the discrepancy with the competition results on similar formula types is intriguing. Can you please clarify? I should note that while researching this point, I found that the SMT Competition of 2018 will have a "10 Second wonder" category (http://smtcomp.sourceforge.net/2018/rules18.pdf). - Pruning via equivalence classes: I could not understand what is the partial "current cost" you mention here. Thanks for clarifying. - Figure 3: please annotate the axes!! - Bilinear model: is the label y_i in {-1,+1}? - Dataset statistics: please provide statistics for each of the datasets: number of formulas, sizes of the formulas, etc. - Search models comparison 5.1: what does 100 steps here mean? Is it 100 sampled strategies? - Missing references: the references below are relevant to your topic, especially [a]. Please discuss connections with [a], which uses supervised learning in QBF solving, where QBF generalizes SMT, in my understanding. [a] Samulowitz, Horst, and Roland Memisevic. "Learning to solve QBF." AAAI. Vol. 7. 2007. [b] Khalil, Elias Boutros, et al. "Learning to Branch in Mixed Integer Programming." AAAI. 2016. Minor typos: - Line 283: looses -> loses | - Pruning via equivalence classes: I could not understand what is the partial "current cost" you mention here. Thanks for clarifying. |
NIPS_2022_2523 | NIPS_2022 | The main contribution of this paper is to introduce the upsample operation into ResTv1 to compensate for the information lost in the downsample. Though this simple design can provide a performance benefit, the generalizability of this design seems narrow: does it only work well for the specific efficient Transformer model with the downsample operation?
For Equ.2, is the Norm operation also eliminated along with the Conv? If so, the paper does not mention it, which is somewhat confusing. 3) I think a more detailed description of Fig.3 is needed for a better understanding, e.g., the meaning of coordinates and curves.
Yes, the authors address the efficiency to some extent. They point out the gap between the theoretical FLOPs and the actual speed, and consider the actual running speed more when designing the model. | 3) I think a more detailed description of Fig.3 is needed for a better understanding, e.g., the meaning of coordinates and curves. Yes, the authors address the efficiency to some extent. They point out the gap between the theoretical FLOPs and the actual speed, and consider the actual running speed more when designing the model. |
NIPS_2017_104 | NIPS_2017 | ---
There aren't any major weaknesses, but there are some additional questions that could be answered and the presentation might be improved a bit.
* More details about the hard-coded demonstration policy should be included. Were different versions of the hard-coded policy tried? How human-like is the hard-coded policy (e.g., how a human would demonstrate for Baxter)? Does the model generalize from any working policy? What about a policy which spends most of its time doing irrelevant or intentionally misleading manipulations? Can a demonstration task be input in a higher level language like the one used throughout the paper (e.g., at line 129)?
* How does this setting relate to question answering or visual question answering?
* How does the model perform on the same train data it's seen already? How much does it overfit?
* How hard is it to find intuitive attention examples as in figure 4?
* The model is somewhat complicated and its presentation in section 4 requires careful reading, perhaps with reference to the supplement. If possible, try to improve this presentation. Replacing some of the natural language description with notation and adding breakout diagrams showing the attention mechanisms might help.
* The related works section would be better understood knowing how the model works, so it should be presented later. | * How does the model perform on the same train data it's seen already? How much does it overfit? |
NIPS_2020_1425 | NIPS_2020 | 1. The proposed method may not be able to accelerate the whole framework with a GPU, as the edge prediction cannot be computed in parallel. The title may need to be modified; otherwise, more experimental results on speed should be provided to validate the statement "accelerating self-attention". 2. Need more detailed analysis on how much memory can be saved compared to other methods, such as XLNet or reversible Transformer/CNN (REFORMER: THE EFFICIENT TRANSFORMER). The explored datasets actually do not need huge memory. It would be better to explore some tasks with much longer sequences. 3. The number of parameters in Table 1 should be listed for fair comparison. 4. No experimental results or ablation study on "Variants of Edge Predictor" are provided. It's unclear which method plays the key role. | 2. Need more detailed analysis on how much memory can be saved compared to other methods, such as XLNet or reversible Transformer/CNN (REFORMER: THE EFFICIENT TRANSFORMER). The explored datasets actually do not need huge memory. It would be better to explore some tasks with much longer sequences.
QoiOmXy3A7 | EMNLP_2023 | 1. The paper is hard to read; the abstract and introduction are well written; however, the method and evaluation are hard to follow (see suggestions section).
2. A few details are missing, such as whether the numbers in Table 2 are for the Gen model with the prototype or for both.
3. No other baselines except for random are evaluated, making it difficult to evaluate how good the method is as compared to others. Meta-learning also conceptually creates prototypes and an instance may belong to one of the prototypes; maybe some baselines could use that intuition.
4. Ablations to demonstrate the importance of different components and loss functions are missing. | 3. No other baselines except for random are evaluated, making it difficult to evaluate how good the method is as compared to others. Meta-learning also conceptually creates prototypes and an instance may belong to one of the prototypes; maybe some baselines could use that intuition.
NIPS_2020_675 | NIPS_2020 | * Monotonic Alignment Search algorithm used in training is deterministic and the training procedure doesn't represent uncertainty over possible alignments. At sampling time, the duration prediction module is also deterministic. Since it was trained with mean-squared-error, it is unable to produce natural and varied prosody since it is equivalent to taking the mean of a univariate Gaussian modeling the duration of each input token. Figure 4 is illustrative of this lack of diversity. The durations across multiple samples from the model are identical. * Section 3.1: "The modified objective does not guarantee the global solution of Equation 3, but it still provides a good lower bound of the global solution." How do we know this is a good lower bound? Please explicate. * Controllability in TTS is only of interest to TTS practitioners if the dimension of control offered is useful for downstream tasks (e.g. prosody/style transfer from a reference, control of emotion/affect/valence/arousal, achieving a desired meaning/prosody e.g. skepticism/confidence/uncertainty/...). While it is interesting that the sampling temperature of the latent prior seems to directly control pitch, I do not think that control via such an uncalibrated parameter is useful for a downstream task without a lot of work. The role pitch plays in speech is highly complicated and varies across language -- in some languages, it corresponds to lexical meaning, in others it's more prosodic. I do not think it's appropriate to describe Glow-TTS as controllable in its current form and would require discussion / comparison to other works that address this topic. Duration control is relevant and useful for downstream tasks. However, the method of control described in the manuscript is not novel (it has been used for over a decade in parametric TTS before end-to-end TTS became popular) and would be straightforward to implement with any TTS system with explicit durations. * I am unsure of the authors' claim that Glow-TTS produces diverse samples. Especially in light of the deterministic duration predictor and MAS algorithm, I would be surprised to see a model like this produce diverse prosody. Figure 4 is particularly revealing, since the pitch and durations across multiple samples from the model look and sound largely identical. To help support this claim, please include multiple samples from the model at various temperatures for a variety of text samples. Ideally the text would have some ambiguity over the intended meaning and the samples would be noticeably different from each other while remaining natural. * The multi-speaker results sound somewhat unnatural and the lower MOS confirms it. It sound almost as if the duration prediction module is underfit. The speaker dependent variation section presents a plot of variation across speakers but the audio samples undermine this. Do you know why this is? Please include a Tacotron 2 baseline in Table 2. | * Monotonic Alignment Search algorithm used in training is deterministic and the training procedure doesn't represent uncertainty over possible alignments. At sampling time, the duration prediction module is also deterministic. Since it was trained with mean-squared-error, it is unable to produce natural and varied prosody since it is equivalent to taking the mean of a univariate Gaussian modeling the duration of each input token. Figure 4 is illustrative of this lack of diversity. The durations across multiple samples from the model are identical. |
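An illustrative aside on the mean-squared-error point in the review above. This is a standard identity, not something taken from the reviewed paper: minimizing expected squared error over a scalar duration recovers the conditional mean, which is why an MSE-trained duration predictor is deterministic at sampling time.

```latex
% Standard identity behind the "MSE duration predictor is deterministic" point:
\hat{d}^{*}(x) \;=\; \arg\min_{d}\; \mathbb{E}\big[(D-d)^2 \mid x\big] \;=\; \mathbb{E}[D \mid x]
% With D \mid x \sim \mathcal{N}(\mu(x), \sigma^2), the optimum is exactly \mu(x):
% a single duration per input token, hence no sampled diversity in timing.
```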
ICLR_2023_2312 | ICLR_2023 | 1. Literature Review
The paper regrettably fails to acknowledge a vast body of related literature, on (i) intention-conditioned trajectory prediction, (ii) variational graph methods for trajectory prediction, and (iii) models that explicitly model social interactions for forecasting. At the very least, these references ought to be mentioned and discussed for a diligent representation of the research space, even if the methods are not directly compared against.
(i) Intention-Conditioned Trajectory Prediction:
[R1, R2, R3] talk about intention-conditioned trajectory prediction for autonomous vehicles. Apart from the data the methods are applied to, the architectures can be applicable to, and are relevant for, the problem being addressed here. Crucially, the DROGON paper defines intention explicitly (more on this in Weakness 2. below).
(ii) Variational Graph Methods:
[R4] from the Neurips I Can't Believe It's Not Better Workshop explicitly deals with graph conditional variational methods for multi-agent trajectory prediction. The results in that paper are very relevant for this research area and should be included.
(iii) Encoding Social Interactions:
Graph and other stochastic methods that encode social interactions between agents have been long applied to trajectory and behavior forecasating problems. [R5] explicitly incorporates a spatiotemporal graph for incorporating social interactions between agents. [R6] more recently explicitly takes a meta-learning approach for modeling the dynamics unique to a group for probabilistic forecasting. A sports team is a group, and if each team is viewed as having unique social dynamics resulting from the team's strategy then [R6]'s core modeling idea is directly applicable. The cue in [R6] terms is simply player location here. Their modeling of social influence of other agents is also permutation invariant, a limitation this paper claims about existing methods. References:
[R1] DROGON: A Trajectory Prediction Model based on Intention-Conditioned Behavior Reasoning - Choi et al.
[R2] Intention-Driven Trajectory Prediction for Autonomous Driving - Fan et al.
[R3] LOKI: Long Term and Key Intentions for Trajectory Prediction - Girase et al.
[R4] Graph Conditional Variational Models: Too Complex for Multiagent Trajectories? - Rudolph et al.
[R5] Social-STGCNN: A Social Spatio-Temporal Graph Convolutional Neural Network for Human Trajectory Prediction - Mohamed et al.
[R6] Social Processes: Self-Supervised Meta-Learning over Conversational Groups for Forecasting Nonverbal Social Cues - Raman et al.
2. Unsupported claims and definitions
The paper doesn't actually define agent intentions and causality in the specific setting, so there is no reasonable way to evaluate whether the proposed method actually models intentions. The intention-conditioned trajectory works I've mentioned talk about intention over long- and short-time horizons, where e.g. the former is in terms of goal destinations. Here the paper is talking about team sports with player intentions but simply states that this results from communication. What does intention mean here? Also, the paper claims to model causal relationships, but I can't see any explicit causal factors modeled or learned in the graph structure. There might be other exogenous variables explaining trajectory behavior.
3. Notation
There are a few notational errors. For instance, the variable used for the sequence cannot be the same as the individual elements: $x_{<t} = [x_1, \ldots]$. See [R4] for this. In many places there exist grammatical errors and incomplete sentences. Please do a pass to fix these. | 3. Notation There are a few notational errors. For instance, the variable used for the sequence cannot be the same as the individual elements: x < t = [ x 1 , .
NIPS_2016_287 | NIPS_2016 | weakness, however, is the experiment on real data where no comparison against any other method is provided. Please see the details comments below.1. While [5] is a closely related work, it is not cited or discussed at all in Section 1. I think proper credit should be given to [5] in Sec. 1 since the spacey random walk was proposed there. The difference between the random walk model in this paper and that in [5] should also be clearly stated to clarify the contributions. 2. The AAAI15 paper titled "Spectral Clustering Using Multilinear SVD: Analysis, Approximations and Applications" by Ghoshdastidar and Dukkipati seems to be a related work missed by the authors. This AAAI15 paper deals with hypergraph data with tensors as well so it should be discussed and compared against to provide a better understanding of the state-of-the-art. 3. This work combines ideas from [4], [5], and [14] so it is very important to clearly state the relationships and differences with these earlier works. 4. End of Sec. 2., there are two important parameters/thresholds to set. One is the minimum cluster size and the other is the conductance threshold. However, the experimental section (Sec. 3) did not mention or discuss how these parameters are set and how sensitive the performance is with respect to these parameters. 5. Sec. 3.2 and Sec. 3.3: The real data experiments study only the proposed method and there is no comparison against any existing method on real data. Furthermore, there is only some qualitative analysis/discussion on the real data results. Adding some quantitative studies will be more helpful to the readers and researchers in this area. 6. Possible Typo? Line 131: "wants to transition". | 3. This work combines ideas from [4], [5], and [14] so it is very important to clearly state the relationships and differences with these earlier works. |
ACL_2017_726_review | ACL_2017 | - Claims of being comparable to state of the art when the results on GeoQuery and ATIS do not support it. General Discussion: This is a sound work of research and could have future potential in the way semantic parsing for downstream applications is done. I was a little disappointed with the claims of “near-state-of-the-art accuracies” on ATIS and GeoQuery, which doesn’t seem to be the case (8 points difference from Liang et al., 2011). And I do not necessarily think that getting SOTA numbers should be the focus of the paper; it has its own significant contribution. I would like to see this paper at ACL provided the authors tone down their claims; in addition, I have some questions for the authors.
- What do the authors mean by minimal intervention? Does it mean minimal human intervention, because that does not seem to be the case. Does it mean no intermediate representation? If so, the latter term should be used, being less ambiguous.
- Table 6: what is the breakdown of the score by correctness and incompleteness?
What % of incompleteness do these queries exhibit?
- What expertise is required from crowd-workers who produce the correct SQL queries? - It would be helpful to see some analysis of the 48% of user questions which could not be generated.
- Figure 3 is a little confusing, I could not follow the sharp dips in performance without paraphrasing around the 8th/9th stages. - Table 4 needs a little more clarification, what splits are used for obtaining the ATIS numbers?
I thank the authors for their response. | - Figure 3 is a little confusing, I could not follow the sharp dips in performance without paraphrasing around the 8th/9th stages. |
NIPS_2018_612 | NIPS_2018 | weakness is not including baselines that address the overfitting in boosting with heuristics. Ordered boosting is non-trivial, and it would be good to know how far simpler (heuristic) fixes go towards mitigating the problem. Overall, I think this paper will spur new research. As I read it, I easily came up with variations and alternatives that I wanted to see tried and compared. DETAILED COMMENTS The paper is already full of content, so the ideas for additional comparisons are really suggestions to consider. * For both model estimations, why start at example 1? Why not start at an example that is 1% of the way into the training data, to help reduce the risk of high variance estimates for early examples? * The best alternative I've seen for fixing TS leakage, while reusing the data sample, uses tools from differential privacy [1, 2]. How does this compare to Ordered TS? * Does importance-sampled voting [3] have the same target leakage problem as gradient boosting? This algorithm has a similar property of only using part of the sequence of examples for a given model. (I was very impressed by this algorithm when I used it; beat random forests hands down for our situation.) * How does ordered boosting compare to the subsampling trick mentioned in l. 150? * Yes, fixes that involve bagging (e.g., BagBoo [4]) add computational time, but so does having multiple permuted sequences. Seems worth a (future?) comparison. * Why not consider multiple permutations, and for each, split into required data subsets to avoid or mitigate leakage? Seems like it would have the same computational cost as ordered boosting. * Recommend checking out the Wilcoxon signed rank test for testing if two algorithms are significantly different over a range of data sets. See [6]. * l. 61: "A categorical feature..." * l. 73: "for each categorical *value*" ? * l. 97: For clarity, consider explaining a bit more how novel values in the test set are handled. * The approach here reminds me a bit of Dawid's prequential analysis, e.g., [5]. Could be worth checking those old papers to see if there is a useful connection. * l. 129: "we reveal" => "we describe" ? * l. 131: "called ordered boosting" * l. 135-137: The "shift" terminology seems less understandable than talking about biased estimates. * l. 174: "remind" => "recall" ? * l. 203-204: "using one tree structure"; do you mean shared \sigma? * Algorithm 1: only one random permutation? * l. 237: Don't really understand what is meant by right hand side of equality. What is 2^j subscript denoting? * l. 257: "tunning" => "tuning" * l. 268: ", what is expected." This reads awkwardly. * l. 311: This reference is incomplete. REFERENCES [1] https://www.slideshare.net/SessionsEvents/misha-bilenko-principal-researcher-microsoft [2] https://www.youtube.com/watch?v=7sZeTxIrnxs [3] Breiman (1999). Pasting small votes for classification in large databases and on-line. Machine Learning 36(1):85--103. [4] Pavlov et al. (2010). BagBoo: A scalable hybrid bagging-the-boosting model. In CIKM. [5] Dawid (1984). Present position and potential developments: Some personal views: Statistical Theory: The Prequential Approach. Journal of the Royal Stastical Society, Series A, 147(2). [6] Demsar (2006). Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7:1--30. | * l. 131: "called ordered boosting" * l. 135-137: The "shift" terminology seems less understandable than talking about biased estimates. |
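As an illustrative aside on the significance test recommended in the review above ([6], Demsar 2006), here is a minimal sketch with scipy; the per-dataset score arrays are hypothetical placeholders, not numbers from the reviewed paper.

```python
# Illustrative sketch: paired, non-parametric comparison of two learners over several datasets.
# The score arrays are made-up placeholders, not results from the reviewed paper.
import numpy as np
from scipy.stats import wilcoxon

scores_a = np.array([0.91, 0.85, 0.78, 0.88, 0.93, 0.81, 0.76, 0.89])  # learner A, one score per dataset
scores_b = np.array([0.89, 0.86, 0.75, 0.84, 0.92, 0.80, 0.77, 0.85])  # learner B, same datasets, same order

stat, p_value = wilcoxon(scores_a, scores_b)  # Wilcoxon signed-rank test on paired differences
print(f"Wilcoxon statistic = {stat:.3f}, p-value = {p_value:.3f}")
```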
ARR_2022_141_review | ARR_2022 | of the paper: • The mix of different approaches and tasks leads to confusion for readers.
• The logic flow of the paper needs to be improved.
• Not sure why the proposed model works for the experimental datasets, since those datasets do not have such textual supervision as co-citing sentences.
• There is no comparison and contrast between the proposed approaches and the baselines.
I strongly suggest improving the narrative of the paper. | • Not sure why the proposed model works for the experimental datasets, since those datasets do not have such textual supervision as co-citing sentences.
NIPS_2018_559 | NIPS_2018 | - my only objection to the paper is that it packs up quite a lot of information, and because of the page-limits it doesn’t include all the details necessary to reconstruct the model. This means cuts were made, some of which are not warranted. Sure, the appendix is there, but the reader needs to get all the necessary details in the main body of the paper. I quite enjoyed the paper, I think it’s definitely NIPS material, but it needs some additional polishing. I added my list of suggestions I think would help improve readability of the paper at the end of the review. Questions: - I might have missed the point of section 4.2 - I see it as a (spacewise-costly) way to say “programs (as opposed to specs) are a better choice as they enable generalization/extrapolation via changing variable values”? What is the experiment there? If it’s just to show that by changing variables, one can extrapolate to different images, I would save space on 1/2 of Figure 9 and focus on lacking parts of the paper (190 - extrapolations produced by our system - how did the system produce those extrapolations? was there a human that changed variable values or is there something in the system enabling this?) - What is + in Figure 3? If elementwise addition, please specify that - Figure 4 caption explains why the number N of particles is not the same across models. However, that still doesn’t stop me from wondering whether there is a significant difference in performance in case all models are using the same number of particles. Do you have that information? - Line 54 mentions that the network can “derender” images with beam search. Is beam search used or not? What is the size of the beam? Is beam search used for each of the N particles? - From what I understood, the model does not have access to previously generated commands. Can you confirm that? - The order of (generated) specs is irrelevant for rendering, but it is for the generation process. How do you cope with that? Do you use a particular order when training the model or do you permute the specs? - Table 2 - “;” denotes OR, right? I would personally use the BNF notation here and use “|” - 153 - “minimized 3 using gradient descent” - how did you treat the fact that min is not differentiable? - Table 5 - this is evaluated on which problems exactly? The same 100 on which the policy was trained? - Please provide some DeepCoder-style baseline details - the same MLP structure? Applied to which engine? A search algorithm or Sketch? - I find 152 - 153 unclear - how did you synthesize minimum cost programs for each \sigma ? \sigma represents a space of possible solutions, no? - Please provide more details on how you trained L_learned - what is the dataset you trained it on (randomly selected pairs of images, sampled from the same pool of randomly generated images, with a twist)? How did you evaluate its performance? What is the error of that model? Was it treated as a regression or as a classification task? - Figure 7 introduces IoU. Is that the same IoU used in segmentation? If so, how does that apply here? Do you count the union/intersection of pixels? Please provide a citation where a reader can quickly understand that measure. Suggestions: - full Table 3 is pretty, but it could easily be halved to save space for more important (missing!) details of the paper - the appendix is very bulky and not well structured.
If you want to refer to the appendix, I would strongly suggest to refer to sections/subsections, otherwise a reader can easily get lost in finding the details - Section 2.1 starts strong, promising generalization to real hand drawings, but in the first sentence the reader realizes the model is trained on artificial data. Only in line 91 it says that the system is tested on hand-written figures. I would emphasize that from the beginning. - Line 108 - penalize using many different numerical constants - please provide a few examples before pointing to the supplement. - Line 154 - a bit more detail of the bilinear model would be necessary (how low-capacity?) - 175-176 - see supplement for details. You need to provide some details in the body of the paper! I want to get the idea how you model the prior from the paper and not the supplement - Table 5 - there’s a figure in the appendix which seems much more informative than this Table, consider using that one instead The related work is well written, I would just suggest adding pix2code (https://arxiv.org/abs/1705.07962) and SPIRAL (https://arxiv.org/abs/1804.01118) for completeness. UPDATE: I've read the author feedback and the other reviews. We all agree that the paper is dense, but we seem to like it nevertheless. This paper should be accepted, even as is because it's a valuable contribution, but I really hope authors will invest additional effort into clarifying the parts we found lacking. | - Line 108 - penalize using many different numerical constants - please provide a few examples before pointing to the supplement.
NIPS_2018_114 | NIPS_2018 | 1. Generalizability. In general, I think the authors need to show how this approach can work on more problems. For example, it looks to me that for most deep net problem A2 is not true. Also, some empirical verification of assumption A1 alone on other problems would be useful to convince me why this approach can generalize. 2. Stability evaluation/analysis is missing. How sensitive is the performance to the lingering radius (i.e. theta or equivalently delta(x, i))? Could the authors give some theoretical analysis or some empirical evaluation? 3. Memory consumption. For many real-world applications, the stochastic gradient methods mentioned in this paper are not acceptable due to huge memory consumption. Could the authors explain how to generalize this approach to other methods, e.g. stochastic gradient descent with fixed batch size? I would expect the growing number of lingering stochastic gradients to be an issue. Some Typos: L532: difference --> different L540: report report --> report ------- Response to Author's feedback: 1. For A1, I agree that if we have explicit written f_i(x) then we can compute radius in a easy way. My original concern is when the function is too complicated that the radius do not have a easy close form, then can we at least empirically evaluate the radius. I guess if the authors want to focus outside DL then this might not be a big issue anymore. 2. I think my concern is no longer a issue if the function can be written explicitly. 3. Originally, I was imagining a deep net setting, where storing O(nd) is not acceptable. And I have concerns about the overhead of computing this on the fly. But I guess it's not a problem whenever SVRG is acceptable. | 1. For A1, I agree that if we have explicit written f_i(x) then we can compute radius in a easy way. My original concern is when the function is too complicated that the radius do not have a easy close form, then can we at least empirically evaluate the radius. I guess if the authors want to focus outside DL then this might not be a big issue anymore. |
NIPS_2016_537 | NIPS_2016 | weakness of the paper is the lack of clarity in some of the presentation. Here are some examples of what I mean. 1) l 63, refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. 3) l 123, this is not the definition of "dominated". 4) for the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann which discusses the ability of universal kernels to separate an arbitrary finite data set with margin arbitrarily close to one. 5) an example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering. 6) in section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means. 7) in the same section, the notation {\cal P} with a subscript is used several times without being defined. 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers. | 6) in section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means.
ACL_2017_553_review | ACL_2017 | Very few--possibly avoid some relatively "empty" statements: 191 : For example, if our task is to identify words used similarly across contexts, our scoring function can be specified to give high scores to terms whose usage is similar across the contexts.
537 : It is educational to study how annotations drawn from the same data are similar or different.
- General Discussion: In the first sections I was not sure that much was being done that was new or interesting, as the methods seemed very reminiscent of previous methods used over the past 25 years to measure similarity, albeit with a few new statistical twists, but conceptually in the same vein. Section 5, however, describes an interesting and valuable piece of work that will be useful for future studies on the topic. In retrospect, the background provided in sections 2-4 is useful, if not necessary, to support the experiments in section 5. In short, the work and results described will be useful to others working in this area, and the paper is worthy of presentation at ACL.
Minor comments: Word, punctuation missing?
264 : For word annotations, we used PPMI, SVD, and SGNS (skipgram with negative sampling from Mikolov et al. (2013b)) word vectors released by Hamilton et al. (2016).
Unclear what "multiple methods" refers to : 278 : some words were detected by multiple methods with CCLA | 278 : some words were detected by multiple methods with CCLA |
NIPS_2017_235 | NIPS_2017 | weakness even if true but worth discussing in detail since it
could guide future work.) [EDIT: I see now that this is *not* the case, see response below]
I also feel that the discussion of Chow-Liu is missing a very
important aspect. Chow-Liu doesn't just correctly recover the
true structure when run on data generated from a tree. Rather,
Chow-Liu finds the *maximum-likelihood* tree for data from an
*arbitrary* distribution. This is a property that almost all
follow-up work does not satisfy (Srebro showed bounds). The
discussion in this paper is all true, but doesn't mention that
"maximum likelihood" or "model robustness" issue at all, which
is hugely important in practice.
For reference: The basic result is that given a single node $u$,
and a hypothesized set of "separators" $S$ (neighbors of $u$) then
there will be some set of nodes $I$ with size at most $r-1$ such
that $u$ and $I$ have positive conditional mutual information.
The proof of the central result proceeds by setting up a
"game", which works as follows:
1) We pick a node $X_u$ to look at.
2) Alice draws two joint samples $X$ and $X'$.
3) Alice draws a random value $R$ (uniformly from the space of
possible values of $X_u$).
4) Alice picks a random set of neighbors of $X_u$, call them $X_I$.
5) Alice tells Bob the values of $X_I$
6) Bob gets to wager on whether $X_u=R$ or $X'_u=R$. Bob wins his
wager if $X_u=R$ and loses his wager if $X'_u=R$, and nothing
happens if both or neither are true.
Here I first felt like I *must* be missing something, since
this is just establishing that $X_u$ has mutual information
with its neighbors. (There is no reference to the "separator"
set S in the main result.) However, it later appears that this
is just a warmup (regular mutual information) and can be
extended to the conditional setting.
Actually, couldn't the conditional setting itself be phrased
as a game, something like
1) We pick a node $X_u$ and a set of hypothesized "separators"
$X_S$ to look at.
2) Alice draws two joint samples $X$ and $X'$. Both are
conditioned on some random value for $X_S$.
3) Alice draws a random value $R$ (uniformly from the space of
possible values of $X_u$).
4) Alice picks a random set of nodes (not including $X_u$ or $X_S$), call them $X_I$.
5) Alice tells Bob the values of $X_I$
6) Bob gets to wager on whether $X_u=R$ or $X'_u=R$. Bob wins his
wager if $X_u=R$ and loses his wager if $X'_u=R$, and nothing
happens if both or neither are true.
I don't think this adds anything to the final result, but is
an intuition for something closer to the final goal.
After all this, the paper discusses an algorithm for greedily
learning an MRF graph (in sort of the obvious way, by
exploiting the above result). There is some analysis of how
often you might go wrong estimating mutual information from
samples, which I appreciate.
Overall, as far as I can see, the result appears to be
true. However, I'm not sure that the purely theoretical result
is sufficiently interesting (at NIPS) to be published with no
experiments. As I mentioned above, Chow-Liu has the major
advantage of finding the maximum likelihood solution, which the current method
does not appear to have. (It would violate hardness results
due to Srebro.) Further, note that the bound given in Theorem
5.1, despite the high order, is only for correctly recovering
the structure of a single node, so there would need to be
another level of applying the union bound to this result with
lower delta to get a correct full model.
EDIT AFTER REBUTTAL:
Thanks for the rebuttal. I see now that I should understand $r$ not as the maximum clique size, but rather as the maximum order of interactions. (E.g. if one has a fully-connected Ising model, you would have r=2 but the maximum clique size would be n). This answers the question I had about this being a generalization of Bressler's result. (That is, this paper's result is a strict generalization.) This does slightly improve my estimation of this paper, though I thought this was a relatively small concern in any case. My more serious concerns are about whether a pure theoretical result is appropriate for NIPS. | 2) Alice draws two joint samples $X$ and $X'$. Both are conditioned on some random value for $X_S$.
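As an illustrative aside on the criterion discussed in the review above — that some small set $I$ has positive conditional mutual information with $u$ given candidate separators $S$ — here is a minimal numpy sketch that computes conditional mutual information from an explicit joint table. The table and variable roles are made up for illustration and are not from the reviewed paper.

```python
# Toy sketch: conditional mutual information I(X_u ; X_I | X_S) from an explicit
# joint distribution over three binary variables. The joint table is a made-up example.
import numpy as np

# p[u, i, s] = P(X_u = u, X_I = i, X_S = s); entries sum to 1.
p = np.array([[[0.10, 0.05], [0.05, 0.10]],
              [[0.05, 0.20], [0.20, 0.25]]])

def cond_mutual_info(p):
    p_s = p.sum(axis=(0, 1))   # P(s)
    p_us = p.sum(axis=1)       # P(u, s)
    p_is = p.sum(axis=0)       # P(i, s)
    cmi = 0.0
    for u in range(2):
        for i in range(2):
            for s in range(2):
                if p[u, i, s] > 0:
                    cmi += p[u, i, s] * np.log(
                        p[u, i, s] * p_s[s] / (p_us[u, s] * p_is[i, s]))
    return cmi

print(f"I(X_u; X_I | X_S) = {cond_mutual_info(p):.4f} nats (positive => X_I is still informative about X_u)")
```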
ICLR_2022_2796 | ICLR_2022 | - The experiments are on very small datasets and in toy settings. - Some parts of the theory are insufficiently explored. For example, under what scenarios can we expect invertibility of $\mathbb{E}_z[G(z)G(z)^T]$? Perhaps this could be shown to hold in simple settings, e.g., $G(z) := \mathrm{ReLU}(Wz)$ with the WDC assumption. Tools from NTK theory could potentially be helpful here, since the entries are of the form $\mathbb{E}_z[\mathrm{ReLU}(\langle w_i, z \rangle)\,\mathrm{ReLU}(\langle w_j, z \rangle)]$. - I have some concerns about some of the claimed relevance of this approach to transfer learning. In particular, I was a bit confused by the experimental setup of the generative prior derived from MNIST VAE. What is the broader claim about the relationship between this framework for dictionary learning with generative coefficient priors and transfer learning, and how does this experiment comment on this relationship? Is the claim that dictionary learning with generative priors can be phrased as learning the last linear layer of a generative model?
Clarity: Overall, the paper was fairly well-written and easy to follow in most parts. Here are some typos that I found:
Top of page 2: “atoms, simultaneously” -> “atoms, while simultaneously”
Bottom of page 5: a transpose on $\mathbb{E}[G(z)G(z)]$ is missing and a parenthesis on $\mathbb{E}[A^s G(z) - A^* G(z^*) G(z^*)^T]$ as well.
In the appendix, it may be best to use notation that shows $\Pi_{i=d}^{1} W_{i,+,z}$’s dependence on $z$, e.g. $W_z := \Pi_{i=d}^{1} W_{i,+,z}$.
In the update rule in Algorithm 1, should the projection operator be applied to $A^s$ or to $A^s - \eta \hat{g}^s$?
Novelty and significance: To the reviewer’s knowledge, this work is the first to incorporate generative neural network priors in the dictionary learning setting. In terms of analysis, the theory is a combination of previous work on dictionary learning from Arora et al [1], along with theory from Bora et al [2] and Hand and Voroninski [3]. While new theoretical tools aren’t provided, the combination of these ideas is novel.
Minor comments:
For convergence guarantees of optimization over z
in compressive sensing or denoising, one may want to cite Huang et al [4] and Heckel et al [5].
Recent work has shown that the logarithmic factor in the WDC can be relaxed to a constant factor [6]. References:
[1] Sanjeev Arora, Rong Ge, Tengyu Ma, and Ankur Motiga. Simple, effiicent, and neural algorithms for sparse coding. JMLR
[2] Ashish Bora, Ail Jalal, Eric Price, and Alexandros G. Dimakis. Compressed sensing using generative models. ICML
[3] Paul Hand and Vladislav Voroninski. Global guarantees for enforcing deep generative priors by empirical risk. COLT
[4] Wen Huang, Reinhard Heckel, Paul Hand, Vladislav Voroninski. A provably convergent scheme for compressive sensing under random generative priors. Journal of Fourier Analysis and Applications
[5] Reinhard Heckel, Wen Huang, Paul Hand, Vladislav Voroninski. Rate-optimal denoising with deep neural networks. Information and Inference
[6] Constantinos Daskalakis, Dhruv Rohatgi, Manolis Zampetakis. Constant-expansion suffices for compressed sensing with generative priors. NeurIPS | - Some parts of the theory are insufficiently explored. For example, under what scenarios can we expect invertibility of $\mathbb{E}_z[G(z)G(z)^T]$? Perhaps this could be shown to hold in simple settings, e.g., $G(z) := \mathrm{ReLU}(Wz)$ with the WDC assumption. Tools from NTK theory could potentially be helpful here, since the entries are of the form $\mathbb{E}_z[\mathrm{ReLU}(\langle w_i, z \rangle)\,\mathrm{ReLU}(\langle w_j, z \rangle)]$.
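To make the invertibility question raised in the review above concrete, here is a minimal Monte Carlo sketch for the simple setting the reviewer suggests, $G(z) = \mathrm{ReLU}(Wz)$ with Gaussian $z$; the dimensions and the random $W$ are arbitrary illustrative choices, not taken from the paper.

```python
# Illustrative sketch: empirically check whether E_z[G(z) G(z)^T] looks invertible
# for G(z) = ReLU(W z) with z ~ N(0, I). W and the sizes are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
k, n, num_samples = 10, 30, 100_000        # latent dim, output dim, Monte Carlo samples
W = rng.normal(size=(n, k)) / np.sqrt(k)   # random weight matrix (illustrative)

Z = rng.normal(size=(num_samples, k))
G = np.maximum(W @ Z.T, 0.0)               # shape (n, num_samples): columns are G(z)
M = (G @ G.T) / num_samples                # Monte Carlo estimate of E_z[G(z) G(z)^T]

eigvals = np.linalg.eigvalsh(M)
print(f"smallest eigenvalue ~ {eigvals.min():.4e}, largest ~ {eigvals.max():.4e}")
```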
NIPS_2016_279 | NIPS_2016 | Weakness: 1. The main concern with the paper is the applicability of the model to real-world diffusion process. Though the authors define an interesting problem with elegant solutions, however, it will be great if the authors could provide empirical evidence that the proposed model captures the diffusion phenomena in real-world. 2. Though the IIM problem is defined on the Ising network model, all the analysis is based on the mean-field approximation. Therefore, it will be great if the authors can carry out experiments to show how similar is the mean-field approximation compared to the true distribution via methods such as Gibbs sampling. Detailed Comments: 1. Section 3, Paragraph 1, Line 2, if there there exists -> if there exists. | 2. Though the IIM problem is defined on the Ising network model, all the analysis is based on the mean-field approximation. Therefore, it will be great if the authors can carry out experiments to show how similar is the mean-field approximation compared to the true distribution via methods such as Gibbs sampling. Detailed Comments: |
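As an illustrative aside on point 2 above (comparing the mean-field approximation against the true distribution via Gibbs sampling), here is a minimal sketch on a tiny Ising model; the graph, couplings, and fields are arbitrary illustrative choices, not the setup of the reviewed paper.

```python
# Illustrative sketch: compare naive mean-field magnetizations with Gibbs-sampling
# estimates on a tiny Ising model (8 spins on a ring). All parameters are made up.
import numpy as np

rng = np.random.default_rng(1)
N, J_val = 8, 0.4
J = np.zeros((N, N))
for i in range(N):                       # ring topology with uniform coupling
    J[i, (i + 1) % N] = J[(i + 1) % N, i] = J_val
h = rng.normal(scale=0.3, size=N)        # random external fields

# Naive mean-field fixed point: m_i = tanh(h_i + sum_j J_ij m_j)
m = np.zeros(N)
for _ in range(500):
    m = np.tanh(h + J @ m)

# Gibbs sampler for P(s) ∝ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), s_i in {-1, +1}
s = rng.choice([-1, 1], size=N)
mag = np.zeros(N)
burn_in, n_sweeps = 2000, 20000
for sweep in range(burn_in + n_sweeps):
    for i in range(N):
        field = h[i] + J[i] @ s
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))   # P(s_i = +1 | rest)
        s[i] = 1 if rng.random() < p_up else -1
    if sweep >= burn_in:
        mag += s
mag /= n_sweeps

print("mean-field m :", np.round(m, 3))
print("Gibbs    E[s]:", np.round(mag, 3))
```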
NIPS_2018_963 | NIPS_2018 | and Clarifications]: - My major concerns are with the experimental setup: (a) The paper bears a similarity in various implementation details to Pappert et.al. [5] (e.g. adaptive scaling etc.), but it chose to compare with the noisy network paper [8]. I understand [5] and [8] are very similar, but the comparison to [5] is preferred, especially because of details like adaptive scaling etc. (b) The labels in Figure-5 mention that DDPG w/ parameter noise: is this method from Plappert et.al. [5] or Fortunato et.al. [8]. It is unclear. (c) No citations are present for the names of baseline methods in the Section-4.3 and 4.4. It makes it very hard to understand which method is being compared to, and the reader has to really dig it out. (d) Again in Figure-5, what is "DDPG(OU noise)"? I am guessing its vanilla DDPG. Hence, I am surprised as to why is "DDPG (w/ parameter space noise)" is performing so much worse than vanilla DDPG? This makes me feel that there might be a potential issue with the baseline implementation. It would be great if the authors could share their perspective on this. (e) I myself compared the plots from Figure-1,2,3, in Pappert et.al. [5] to the plots in Figure-5 in this paper. It seems that DDPG (w/ parameter space noise) is performing quite worse than their TRPO+noise implementation. Their TRPO+noise beats the vanilla TRPO, but DDPG+noise seems to be worse than DDPG itself. Please clarify the setup. (f) Most of the experiments in Figure-4 seems to be not working at all with A2C. It would be great if authors could share their insight. - On the conceptual note: In the paper, the proposed approach of encouraging diversity of policy has been linked to "novelty search" literature from genetic programming. However, I think that taking bonus as KL-divergence of current policy and past policy is much closer to perturbing policy with a parameter space noise. Both the methods encourage the change in policy function itself, rather than changing the output of policy. I think this point is crucial to the understanding of the proposed bonus formulation and should be properly discussed. - Typo in Line-95 [Final Recommendation]: I request the authors to address the clarifications and comments raised above. My current rating is marginally below the acceptance threshold, but my final rating will depend heavily on the rebuttal and the clarifications for the above questions. [Post Rebuttal] Reviewers have provided a good rebuttal. However, the paper still needs a lot of work in the final version. I have updated my rating. | - On the conceptual note: In the paper, the proposed approach of encouraging diversity of policy has been linked to "novelty search" literature from genetic programming. However, I think that taking bonus as KL-divergence of current policy and past policy is much closer to perturbing policy with a parameter space noise. Both the methods encourage the change in policy function itself, rather than changing the output of policy. I think this point is crucial to the understanding of the proposed bonus formulation and should be properly discussed. |
NIPS_2022_807 | NIPS_2022 | The technical contribution of this work is limited. The algorithms presented and their analysis are, in my understanding, mainly lifted from the existing literature. The Active Reward Learning guarantees are not really novel or surprising. The reward-free section lacks citations, but these results are already present in the literature.
Despite 2) I think the paper has merit because of the introduction of this setup. | 2) I think the paper has merit because of the introduction of this setup. |
NIPS_2020_1504 | NIPS_2020 | * In the context of posterior inference (i.e. p corresponds to a posterior), the stochastic Stein discrepancy investigated here is technically different from the common practice: here separate mini-batches of data points are drawn for each sample point, while in practice we usually use a fixed mini-batch for all sample points. The version discussed here is more difficult to implement. * I have concerns about the correctness of the proof; see below. | * I have concerns about the correctness of the proof; see below. |
BCRZq5nNZu | ICLR_2024 | - Originality
- Contributions are incremental and novelty is limited.
- Quality
- [Page-1, Section-1, Para-3] When a model sequentially learns over a sequence of datasets (in this case chunks) without measures for retaining past knowledge, it tends to forget past learning, as is evident in the CL literature in both cases of homogeneous and heterogeneous data (chunk) distributions. Therefore, it is expected that the model performance will drop in such cases; hence, the second claim is a valid expectation in such a setting and does not constitute a significant outcome of the analysis.
- In practical scenarios where task boundaries are not pre-known or specified, per-chunk weight averaging could easily worsen the model performance as averaging is done without consideration of the current chunk's domain/distribution as compared to the past chunks.
- Per-chunk weight averaging can be seen as the simplest form of knowledge aggregation technique in a continual learning setting. Hence, it reduces forgetting in the ideal (class-balanced) scenario which is the usual expectation from such techniques and does not constitute high significance as more sophisticated methods have already been developed in CL literature like weight regularization, data replay and incorporating additional parameters.
- The chunking setting described in the papers is ideal and data for such settings is also generated under simplified and impractical assumptions which will not scale to the online settings with changing data distributions in real-world scenarios.
- I am keen to hear the response of the authors on this and hope that they can change my point of view.
- Clarity
- It is difficult to keep track of the different data preparation techniques for "offline SGD", "standard CL" and "chunking" methods. It would be better to have clear algorithms and/or pictorial illustrations for the same.
- [Page-2, Section-2, Para-2] Please elaborate on the classification problems being referred to here.
- [Figure-2] The Font size is too small, please increase it and also add an explanation of the models shown in the legend (Move them from Appendix A to the Figure caption).
- Inconsistent use of terminologies. Please clarify the following terminologies (maybe in a tabular format) so that the reader can refer to them whenever required:
- "offline learning"
- "plain SGD learning"
- "full CL setting"
- "standard CL"
- "online CL"
- Significance
- [Table-2] As "per-chunk weight averaging" strategy involves updating weight parameters. It would make sense to compare it with the existing "weight regularization" based CL strategies like the below methods:
- [EWC] -> Kirkpatrick, James, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan et al. "Overcoming catastrophic forgetting in neural networks." Proceedings of the national academy of sciences 114, no. 13 (2017): 3521-3526.
- [SI] -> Zenke, Friedemann, Ben Poole, and Surya Ganguli. "Continual learning through synaptic intelligence." In International conference on machine learning, pp. 3987-3995. PMLR, 2017.
- Typographical errors:
- [Page-1, Section-1, Para-1] "thwart" -> "thwarted", "focuses CL" -> "focuses of CL"
- [Page-2, Section-2, Para-2] "called called" -> "called" | - Clarity - It is difficult to keep track of the different data preparation techniques for "offline SGD", "standard CL" and "chunking" methods. It would be better to have clear algorithms and/or pictorial illustrations for the same. |
ICLR_2022_518 | ICLR_2022 | 1. I am not sure if the experiments would be very appealing to the deep learning community because they do not compare with the many latest temporal network SOTAs, and the two tasks seem to mostly target an audience from robotics and control. Technically speaking, it mainly uses temporal network datasets whose features have strong continuous-time dynamics, and only for the node-level prediction task. To appeal to a broader audience from the ICLR community, I would suggest that the experiments be significantly extended by referencing and comparing with the following papers from recent years: [1, 2, 3, 4, 5]. Alternatively, the authors could be more explicit in restricting ST-GNN's application domain. 2. Despite being well-principled and neat, the proposed space-time convolution (Equation 7) does not seem quite new to me. It seems essentially just a temporal convolution followed by graph diffusion which is not totally new ([6] for example adopted a very similar idea).
[1] Inductive Representation Learning on Temporal Graphs [2] Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks [3] EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs [4] Variational Graph Recurrent Neural Networks [5] JODIE: Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks [6] TEDIC: Neural Modeling of Behavioral Patterns in Dynamic Social Interaction Networks | 7) does not seem quite new to me. It seems essentially just a temporal convolution followed by graph diffusion which is not totally new ([6] for example adopted a very similar idea). [1] Inductive Representation Learning on Temporal Graphs [2] Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks [3] EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs [4] Variational Graph Recurrent Neural Networks [5] JODIE: Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks [6] TEDIC: Neural Modeling of Behavioral Patterns in Dynamic Social Interaction Networks |
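As an illustrative aside on point 2 above, here is a minimal numpy sketch of the generic "temporal convolution followed by graph diffusion" pattern the reviewer describes; the shapes, kernel, and adjacency are arbitrary placeholders, and this is not the reviewed paper's Equation 7.

```python
# Illustrative sketch of the generic "temporal convolution followed by graph diffusion"
# pattern mentioned in the review. Shapes, kernel, and graph are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
T, N, F = 16, 5, 3                       # time steps, nodes, feature channels
X = rng.normal(size=(T, N, F))           # node features over time

# 1) Temporal convolution: a length-3 causal filter applied independently per node/channel.
w = np.array([0.5, 0.3, 0.2])
H = np.zeros_like(X)
for t in range(T):
    for tau, w_tau in enumerate(w):
        if t - tau >= 0:
            H[t] += w_tau * X[t - tau]

# 2) Graph diffusion: multiply by a row-normalized adjacency (one diffusion step).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)     # row-stochastic diffusion operator
Y = np.einsum("ij,tjf->tif", P, H)       # diffuse features at every time step

print(Y.shape)                           # (T, N, F)
```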
NIPS_2018_539 | NIPS_2018 | Weakness: - Although authors try to explain the STABLE model in a Bayesian framework using variational lower bound, as we see in the final form of the algorithm, it doesn't seem different from a simple regularization method illustrated in Section 3.2.1 when discriminator in Figure 3 corresponds to L2 distance. - The experimental protocol seems favorable to the proposed method and unfair to previous methods such as loss correction or S-adaptation. For example, one may inject noise transition prior on S-adaptation by setting corresponding \theta values to zero and renormalize after each update. - In addition, as illustrated in 3.2.1, one can easily include regularization to train noise transition matrix to follow the prior. It doesn't seem like a valid excuse to exclude regularization method from comparison due to the difficulty of experimentation when the proposed method may also involve some hyperparameters that balances training between reconstructor, generator and discriminator. - Experimental validation is not satisfactory as it is only evaluated with artificially generated noise transition constraints. | - The experimental protocol seems favorable to the proposed method and unfair to previous methods such as loss correction or S-adaptation. For example, one may inject noise transition prior on S-adaptation by setting corresponding \theta values to zero and renormalize after each update. |
ICLR_2023_2235 | ICLR_2023 | 1. In section 3.1 they do not specify what certain notations mean, e.g., the difference between the two transaction tables on the right of figure 2.
2. The jump from section 3.2 to 3.3 is big, especially for people who are unfamiliar with the algorithms they point to, such as FP-growth Han et al. (2000) and apriori Agrawal et al. (1994). They use an example for section 3.1 but then they drop the example for subsequent sections in the algorithm.
3. Other evaluation metrics employed by other papers, e.g., fidelity to the model and comprehensibility, could have been explored. Human evaluations might make a more compelling case.
4. They don’t perform any study about which semantic features help and which harm the F1 score.
5. Visualization is an important part of explainable models, which this paper lacks | 2. The jump from section 3.2 to 3.3 is big, especially for people who are unfamiliar with the algorithms they point to, such as FP-growth Han et al. (2000) and apriori Agrawal et al. (1994). They use an example for section 3.1 but then they drop the example for subsequent sections in the algorithm.
x31F1VmiV7 | ICLR_2024 | - the rationality of the retriever: 1). this paper claims that the retriever generates diverse sensitive words, while in the implementation, it actually samples words from 50 candidates, which reside in a very limited search space. 2). the approximation of the attacker is coarse, since the retriever only returns top-3 words from 50 candidates for the following inference of the large language model, while in realistic scenarios, the attacker could apply prompt dilution [1], misspelled or malformed inputs, or any other strategies to evade the safety filter.
- the upper bound of the whole pipeline depends on the knowledge of the selected open-source image and text filter. Since the image or textual filters of the commercial APIs such as DALLE-3 are not accessible, the transferability of the outputs obtained by their pipeline remains a question.
- the fairness of the evaluation metric: this paper includes the proportion of their generated images rejected by the image filter as part of the attack success rate (ASR). However, in the real case, commercial APIs would not return those images if their image filter recognizes its NSFW content. So, including such a metric in the ASR may not be fair.
- the experimental results show no great advantage over the human instruction attack, TLA. In Table 1, the toxic rate of TLA is lower than BSPA, which means the output of BSPA is less stealthy than TLA. As for ASR_hum, BSPA is 3.8\%, gaining little advantage over the 1.66\% of TLA. Moreover, in Table 3, with their proposed RSF filter, the explicit attack achieves 3.6\% better performance than BSPA, which raises doubt about the generality of the RSF filter.
[1] Javier Rando, Daniel Paleka, David Lindner, Lennard Heim, and Florian Trame`r. Red-teaming the stable diffusion safety filter. In NeurIPS ML Safety Workshop, 2022. | 1). this paper claims that the retriever generates diverse sensitive words, while in the implementation, it actually samples words from 50 candidates, which reside in a very limited search space. |
ICLR_2022_1454 | ICLR_2022 | 1: The contribution of SGDEM over AEGD is limited. Although theoretical analysis is provided to verify the effectiveness of the proposed algorithm, the advantages of SGDEM over the AEGD are unclear. As an improved version of AEGD, I believe detailed comparisons of theoretical results between these two methods are required.
2: The motivation to incorporate the momentum mechanism is straightforward since the momentum is widely used for optimization methods such as the popular SGD with momentum. However, the relation between the energy and the momentum is unclear. If it is just a combination of the known energy method in AEGD and the momentum method in SGD with momentum, the idea is not well-motivated.
3: The experimental results are weak. For most of the experiments, the proposed method performs worse than baseline AEGD even with the existence of oscillation.
4: The experiments are only conducted for vision tasks while NLP is a very important application in deep learning. The optimization method should also be essentially tested for NLP tasks. | 4: The experiments are only conducted for vision tasks while NLP is a very important application in deep learning. The optimization method should also be essentially tested for NLP tasks. |
NIPS_2017_434 | NIPS_2017 | ---
This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance:
1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not
ablated. How important is the added complexity? Will one IN do?
2. Section 4.2: To what extent should long term rollouts be predictable? After a certain amount of time it seems MSE becomes meaningless because too many small errors have accumulated. This is a subtle point that could mislead readers who see relatively large MSEs in figure 4, so perhaps a discussion should be added in section 4.2.
3. The images used in this paper have randomly sampled CIFAR images as backgrounds to make the task harder.
While more difficult tasks are more interesting modulo all other factors of interest, this choice is not well motivated.
Why is this particular dimension of difficulty interesting?
4. line 232: This hypothesis could be specified a bit more clearly. How do noisy rollouts contribute to lower rollout error?
5. Are the learned object state embeddings interpretable in any way before decoding?
6. It may be beneficial to spend more time discussing model limitations and other dimensions of generalization. Some suggestions:
* The number of entities is fixed and it's not clear how to generalize a model to different numbers of entities (e.g., as shown in figure 3 of INs).
* How many different kinds of physical interaction can be in one simulation?
* How sensitive is the visual encoder to shorter/longer sequence lengths? Does the model deal well with different frame rates?
Preliminary Evaluation ---
Clear accept. The only thing which I feel is really missing is the first point in the weaknesses section, but its lack would not merit rejection. | 2. Section 4.2: To what extent should long term rollouts be predictable? After a certain amount of time it seems MSE becomes meaningless because too many small errors have accumulated. This is a subtle point that could mislead readers who see relatively large MSEs in figure 4, so perhaps a discussion should be added in section 4.2. |
ACL_2017_779_review | ACL_2017 | However, there are many points that need to be addressed before this paper is ready for publication.
1) Crucial information is missing. Can you flesh out more clearly how training and decoding happen in your training framework? I found that the equations do not completely describe the approach. It might be useful to use a couple of examples to make your approach clearer.
Also, how is the Monte Carlo sampling done? 2) Organization: The paper is not very well organized. For example, results are broken into several subsections, whereas they would be better presented together. The organization of the tables is very confusing. Table 7 is referred to before Table 6. This made it difficult to read the results.
3) Inconclusive results: After reading the results section, it's difficult to draw conclusions when, as the authors point out in their comparisons, this can be explained by the total size of the corpus involved in their methods (621). 4) Not so useful information: While I appreciate the fleshing out of the assumptions, I find that dedicating a whole section of the paper plus experimental results to this is a lot of space. - General Discussion: Other: 578: We observe that word-level models tend to have lower valid loss compared with sentence-level methods….
Is it valid to compare the loss from two different loss functions?
Sec 3.2, the notations are not clear. What does script(Y) mean?
How do we get p(y|x)? This is never explained. Eq 7 deserves some explanation, or is better removed.
320: What approach did you use? You should talk about that here. 392: Do you mean 2016?
Nitty-gritty: 742: import => important 772: inline citation style 778: can significantly outperform 275: Assumption 2 needs to be rewritten … a target sentence y from x should be close to that from its counterpart z. | 2) Organization: The paper is not very well organized. For example, results are broken into several subsections, whereas they would be better presented together. The organization of the tables is very confusing. Table 7 is referred to before Table 6. This made it difficult to read the results. |
MVosmEvLSb | ICLR_2025 | 1) The problem is not well motivated with respect to utility for the ML community
2) The paper seems limited to analyzing two methods proposed by authors for a new problem. The authors should mention their contributions in comparison to existing literature more clearly.
3) Assuming no error in $X_S$ seems to be a very restrictive assumption.
4) The assumption in Eq 13 made in Lemma 1 seems theoretically motivated to prove the lemma and is not justified/motivated from a practitioner's perspective. I have some comments for Eq 14 in Theorem 1 and Eq 17 in Theorem 2.
5) For example, kindly see the mutual incoherence assumption made in the standard support recovery paper: [https://ieeexplore.ieee.org/document/4839045](https://ieeexplore.ieee.org/document/4839045). One can get some intuition about the problem from such an assumption on mutual incoherence but not from the assumptions made in the submission.
6) The authors discuss support recovery in theory but use the F1 score in their experiments. It would be easier to verify the theoretical claims if the authors directly compute the empirical support recovery probability. For example, please refer to experiments of [https://ieeexplore.ieee.org/document/4839045](https://ieeexplore.ieee.org/document/4839045).
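For context on point 5, and in my notation rather than the submission's: the mutual incoherence (irrepresentability) condition in the linked support-recovery paper is commonly stated as
\| X_{S^c}^T X_S (X_S^T X_S)^{-1} \|_{\infty} <= 1 - \gamma for some \gamma in (0, 1],
where the norm is the maximum absolute row sum; it says the off-support columns must not be too well explained by the support columns. An assumption of this interpretable form would make the conditions in Eq 13, 14, and 17 easier to assess. The expression above is illustrative and is not taken from the submission.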
Minor comments
1) Section 3.1 seems to be slightly verbose. I think it can be put in the Appendix.
2) The authors should provide reference for Wedin’s theorem in line 368
3) Theorem 1 and Theorem 2 seem to be a corollary of Lemma 1 and not novel theorems. Typos:
1. Line 190,192, 357: Equation is typed twice
2. Various typos in Eq 13; e.g., should the subscript of the norm be $\infty$ rather than 2?
3. Line 343: Problem in equation 5
4. Line 397: I think the authors mean the right-hand side instead of the left-hand side.
5. There are also some grammatical errors in the submission. | 5) For example, kindly see the mutual incoherence assumption made in the standard support recovery paper: [https://ieeexplore.ieee.org/document/4839045](https://ieeexplore.ieee.org/document/4839045). One can get some intuition about the problem from such an assumption on mutual incoherence but not from the assumptions made in the submission. |
ACL_2017_727_review | ACL_2017 | Quantitative results are given only for the author's PSL model and not compared against any traditional baseline classification algorithms, making it unclear to what degree their model is necessary. Poor comparison with alternative approaches makes it difficult to know what to take away from the paper.
The qualitative investigation is interesting, but the chosen visualizations are difficult to make sense of and add little to the discussion. Perhaps it would make sense to collapse across individual politicians to create a clearer visual.
- General Discussion: The submission is well written and covers a topic which may be of interest to the ACL community. At the same time, it lacks proper quantitative baselines for comparison. Minor comments: - line 82: A year should be provided for the Boydstun et al. citation - It’s unclear to me why similar behavior (time of tweeting) should necessarily be indicative of similar framing and no citation was given to support this assumption in the model.
- The related work goes over quite a number of areas, but glosses over the work most clearly related (e.g. PSL models and political discourse work) while spending too much time mentioning work that is only tangential (e.g. unsupervised models using Twitter data).
- Section 4.2 it is unclear whether Word2Vec was trained on their dataset or if they used pre-trained embeddings.
- The authors give no intuition behind why unigrams are used to predict frames, while bigrams/trigrams are used to predict party.
- The authors note that temporal similarity worked best with one hour chunks, but make no mention of how important this assumption is to their results. If the authors are unable to provide full results for this work, it would still be worthwhile to give the reader a sense of what performance would look like if the time window were widened.
- Table 4: Caption should make it clear these are F1 scores as well as clarifying how the F1 score is weighted (e.g. micro/macro). This should also be made clear in the “evaluation metrics” section on page 6. | - The authors give no intuition behind why unigrams are used to predict frames, while bigrams/trigrams are used to predict party. |
VSBBOEUcmD | EMNLP_2023 | - Since the proposed LLM-based approach aims to benefit from low-resource target domains, it is necessary to evaluate how the size of available target text affects performance. However, the paper only investigates one fixed amount of target text, leaving important questions unanswered. For example, does the amount of available target text affect the diversity of the generated text and impact the effectiveness of the proposed approach? Would this approach be more effective if you further reduce the amount of target text?
- The paper emphasizes using grammar rules as prompts for LLM to generate target-domain text, but no experiments were conducted to evaluate the benefits it brings.
- The writing could be improved to provide more implementation details. For instance, the hyperparameters used for constructing LLM prompts in section 3.5 are not detailed. The implementation details of the instance selection criteria are not provided, such as the confidence measure and how it is combined with the grammar-rule-based selection approach. | - Since the proposed LLM-based approach aims to benefit from low-resource target domains, it is necessary to evaluate how the size of available target text affects performance. However, the paper only investigates one fixed amount of target text, leaving important questions unanswered. For example, does the amount of available target text affect the diversity of the generated text and impact the effectiveness of the proposed approach? Would this approach be more effective if you further reduce the amount of target text? |
NIPS_2021_1947 | NIPS_2021 | The writing, and the linking and description of sections, need improvement.
Latitude, longitude, and distances are based on particular geodetic models of the Earth, since the Earth's shape is a geoid. Google Maps uses one model; satellite, naval, and flight systems use others. Which system is adhered to here is not clear and does not seem to have been considered. What happens if a POI spans large regions versus small regions?
Why can't this problem be modeled as a recommender problem? Justify.
How are timestamps validated across time-zones? Are they all in GMT or offset taken care of?
Analysis is missing from Experimental section. Correctness:
Line 51 - "For example, people are more likely to visit Food POIs at lunch time than at other times. " - should not this be temporal context?
Line 58 - "manually designed functions" - usually human designed functions wrt semantics are better than those learnt by automation? Defend.
Line 184 - the fact that something works here does not mean it will work elsewhere. An easy example: deep learning works well for images, yet fails miserably for signals.
Line 216 - different recommender problems have their own inherent characteristics. For example, music has genre and shopping has market-basket influence.
Line 246 - extra softmax layer not clear.
Line 293: why Adam optimizer?
Effort: More effort has been put into writing the paper than into the coding and the thought process (philosophy). The reverse was expected.
Suggestions: 0. Give an example to illustrate the problem.
This is a technical paper venue. Your abstract should contain numbers instead of promising results. Directly jump to the point and results and the philosophy. Elaboration can be done in Introduction.
The long sentences should be broken into shorter sentences for clarity, continuity, and smooth reading. Maybe another round of proofreading will enhance the grammatical and contextual use of words.
It is recommended to give a diagram in the 'Introduction' section so that the problem statement and application can be grasped at a glance.
Maybe the gaps in related work can be listed in a table and the extra/new contributions highlighted, focusing on whether the new work is needed at all or is just a nice-to-have feature.
Remove repetitions - there is a lot of text that is not needed again - and focus more on the technical content.
Make the contribution section crisp. You have introduced a novel method - so what? Why would others use yours instead of their own? Highlight this in your preferred way.
De-clutter useless information - Line 130 - "For example, in computer vision, it has been used for semantic segmentation [25, 5], crowd counting [45], depth completion [27], and visual localisation [34]." - this wastes references and space.
Follow the train/eval split guidance from Andrew Ng's Machine Learning Yearning book.
How do demographics and world events bias the POI prediction? This is a possible direction for future work.
There is no negative impact per se, apart from the privacy issue of mobility data that may not be fully anonymized following k-anonymity logic. In terms of the current pandemic scenario, this work may find some value in contact-tracing and crowd-management applications, and hence has a social-impact side. | - This wastes references and space. Follow the train/eval split guidance from Andrew Ng's Machine Learning Yearning book. How do demographics and world events bias the POI prediction? This is a possible direction for future work. There is no negative impact per se, apart from the privacy issue of mobility data that may not be fully anonymized following k-anonymity logic. In terms of the current pandemic scenario, this work may find some value in contact-tracing and crowd-management applications, and hence has a social-impact side. |
ICLR_2022_1212 | ICLR_2022 | 1) The experimental settings are not hard enough to evaluate the performance of FSL. There is no doubt that there is information loss when the devices transmit only the ranking of scores. This kind of information loss is not serious when the rankings on different devices are similar (the local subnetwork structures are similar due to similar data distributions). In this paper, the authors use Dirichlet distributions to construct non-IID data for MNIST and CIFAR10. Even though the data distributions are different across devices, each device still holds all classes of data, and the local subnetwork structures would not show significant differences. And I guess the robustness results have the same problem since the authors use a voting mechanism to update the global ranking. I am wondering whether FSL would perform well under the non-IID setting in the FedAvg paper, where each client only has two classes of data rather than all classes of data. 2) The improvement over baselines is not significant on some datasets. For example, “Top-K 10%” achieves even higher accuracy than “FSL” with lower communication cost on the FEMNIST dataset. 3) The idea of utilizing “supermask” seems novel, but this paper seems to just combine “supermask” with FL. It is okay to do “A plus B” things, but you need to provide some scientific contributions, like providing a theoretical analysis of why “supermask plus FL” works and what challenges you solved that make it deserve acceptance at a top venue like ICLR. | 3) The idea of utilizing “supermask” seems novel, but this paper seems to just combine “supermask” with FL. It is okay to do “A plus B” things, but you need to provide some scientific contributions, like providing a theoretical analysis of why “supermask plus FL” works and what challenges you solved that make it deserve acceptance at a top venue like ICLR. |
NIPS_2020_790 | NIPS_2020 | Generally speaking, the paper is interesting with solid theoretical results. I have few questions: * In the analysis of the CRS model, the full CRS is considered. However, the factorized CRS is considered for the empirical results. Do the derived theoretical guarantees follow for the factorized CRS? Is it possible to simulate both versions of CRS to visualize the performance gap, if any? * In the empirical results section, It is evident that the performance gap between PL and CRS is much lower than the one between CRS and Mallows across most of the databases. Are there any insights about such a behavior? | * In the empirical results section, It is evident that the performance gap between PL and CRS is much lower than the one between CRS and Mallows across most of the databases. Are there any insights about such a behavior? |
NIPS_2019_932 | NIPS_2019 | weakness is that some of the main results come across as rather simple combinations of existing ideas/results, but on the other hand the simplicity can also be viewed as a strength. I don't find the Experiments section essential, and would have been equally happy to have this as a purely theory paper. But the experiments don't hurt either. My remaining comments are mostly quite minor - I will put a * next to those where I prefer a response, and any other responses are optional: [*] p2: Please justify the claim "optimal number of measurements" - in particular highlighting the k*log(n/k) + 1/eps lower bound from [1] and adding it to Table 1. As far as I know, it is an open problem as to whether the k^{3/2} term is unavoidable in the binary setting - is this correct? (If not, again please include a citation and add to Table 1) - p2: epsilon is used without being defined (and also the phrase "approximate recovery") - p4: Avoid the uses of the word "necessary", since these are only sufficient conditions. Similarly, in Lemma 3 the statement "provided that" is strictly speaking incorrect (e.g., m = 0 satisfies the statement given). - The proof of Lemma 1 is a bit confusing, and could be re-worded. - p6: The terminology "rate", "relative distance", and notation H_q(delta) should not be assumed familiar for a NeurIPS audience. - I think the proof of Theorem 10 should be revised. Please give brief explanations for the steps (e.g., the step after qd = (...) follows by re-arranging the choice of n, etc.) [*] In fact, I couldn't quite follow the last step - substituting q=O(k/alpha) is clear, but why is the denominator also proportional to alpha/k? (A definition of H_q would have helped here) - Lemma 12: Please emphasize that m is known but x is not - this seems crucial. - For the authors' interest, there are some more recent refined bounds on the "for-each" setting such as "Limits on Support Recovery with Probabilistic Models: An Information-Theoretic Framework" and "Sparse Classification: A Scalable Discrete Optimization Perspective", though since the emphasis of this paper is on the "for-all" setting, mentioning these is not essential. Very minor comments: - No need for capitalization in "Group Testing" - Give a citation when group testing first mentioned on p3 - p3: Remove the word "typical" from "the typical group testing measurement", I think it only increases ambiguity/confusion. - Lemma 1: Is "\cdot" an inner product? Please make it clear. Also, should it be mx or m^T x inside the sign(.)? - Theorem 8: Rename delta to beta to avoid inconsistency with delta in Theorem 7. Also, is a "for all d" statement needed? - Just before Section 4.2, perhaps re-iterate that the constructions for [1] were non-explicit (hence highlighting the value of Theorem 10). - p7: "very low probability" -> "zero probability" - p7: "This connection was known previously" -> Add citation - p10: Please give a citation for Pr[sign = sign] = (... cos^-1 formula ...). === POST-REVIEW COMMENTS: The responses were all as I had assumed them to be when stating my previous score, so naturally my score is unchanged. Overall a good paper, with the main limitation probably being the level of novelty. | - No need for capitalization in "Group Testing" - Give a citation when group testing first mentioned on p3 - p3: Remove the word "typical" from "the typical group testing measurement", I think it only increases ambiguity/confusion. |
NIPS_2019_1207 | NIPS_2019 | - Moderate novelty. This paper combines various components proposed in previous work (some of it, it seems, unbeknownst to the authors - see Comment 1): hierarchical/structured optimal transport distances, Wasserstein-Procrustes methods, sample complexity results for Wasserstein/Sinkhorn objectives. Thus, I see the contributions of this paper being essentially: putting together these pieces and solving them cleverly via ADMM. - Lacking awareness of related work (see Comment 1) - Missing relevant baselines and runtime experimental results (Comments 2, 3 and 4) Major Comments/Questions: 1 Related Work. My main concern with this paper is its apparent lack of awareness of two very related lines of work. On the one hand, the idea of defining hierarchical OT distances has been explored before in various contexts (e.g., [5], [6] and [7]), and so has leveraging cluster information for structured losses, e.g. [9] and [10] (note that latter of these relies on an ADMM approach too). On the other hand, combining OT with Procrustes alignment has a long history too (e.g, [1]), with recent successful application in high-dimensional problems ([2], [3], [4]). All of these papers solve some version of Eq (4) with orthogonality (or more general constraints), leading to algorithms whose core is identical to Algorithm 1. Given that this paper sits at the intersection of two rich lines of work in the OT literature, I would have expected some effort to contrast their approach, both theoretically and empirically, with all these related methods. 2. Baselines. Related to the point above, any method that does not account for rotations across data domains (e.g., classic Wasserstein distance) is inadequate as a baseline. Comparing to any of the methods [1]-[4] would have been much more informative. In addition, none of the baselines models group structure, which again, would have been easy to remedy by including at least one alternative that does (e.g., [10] or the method of Courty et al, which is cited and mentioned in passing, but not compared against). As for the neuron application, I am not familiar with the DAD method, but the same applies about the lack of comparison to OT-based methods with structure/Procrustes invariance. 3. Conflation of geometric invariance and hierarchical components. Given that this approach combines two independent extensions on the classic OT problem (namely, the hierarchical formulation and the aligment over the stiefel manifold), I would like to understand how important these two are for the applications explored in this work. Yet, no ablation results are provided. A starting point would be to solve the same problem but fixing the transformation T to be the identity, which would provide a lower bound that, when compared against the classic WA, would neatly show the advantage of the hierarchical vs a "flat" classic OT versions of the problem. 4. No runtime results. Since computational efficiency is one of the major contributions touted in the abstract and introduction, I was expecting to see at least empirical and/or a formal convergence/runtime complexity analysis, but neither of these was provided. Since the toy example is relatively small, and no details about the neural population task are provided, the reader is left to wonder about the practical applicability of this framework for real applications. Minor Comments/Typos: - L53. *the* data. - L147. It's not clear to me why (1) is referred to as an update step here. Wrong eqref? 
- Please provide details (size, dimensionality, interpretation) about the neural population datasets, at least in the supplement. Many readers will not be familiar with them. References: * OT-based methods to align in the presence of unitary transformations: [1] Rangarajan et al., "The Softassign Procrustes Matching Algorithm", 1997. [2] Zhang et al., "Earth Mover's Distance Minimization for Unsupervised Bilingual Lexicon Induction", 2017. [3] Alvarez-Melis et al., "Towards Optimal Transport with Global Invariances", 2019. [4] Grave et al., "Unsupervised Alignment of Embeddings with Wasserstein Procrustes", 2019. * Hierarchical OT methods: [5] Yurochkin et al., "Hierarchical Optimal Transport for Document Representation". [6] Schmitzer and Schnörr, "A Hierarchical Approach to Optimal Transport", 2013 [7] Dukler et al., "Wasserstein of Wasserstein Loss for Learning Generative Models", 2019 [9] Alvarez-Melis et al., "Structured Optimal Transport", 2018 [10] Das and Lee, "Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching", 2018 | - Please provide details (size, dimensionality, interpretation) about the neural population datasets, at least in the supplement. Many readers will not be familiar with them. References: |
NIPS_2017_575 | NIPS_2017 | - While the general architecture of the model is described well and is illustrated by figures, architectural details lack mathematical definition, for example, multi-head attention. Why is there a split arrow in Figure 2 right, bottom right? I assume these are the inputs for the attention layer, namely query, keys, and values. Are the same vectors used for keys and values here, or different sections of them? A formal definition of this would greatly help readers understand this; a standard formulation is sketched below for reference.
- The proposed model contains lots of hyperparameters, and the most important ones are evaluated in ablation studies in the experimental section. It would have been nice to see significance tests for the various configurations in Table 3.
- The complexity argument claims that self-attention models have a maximum path length of 1, which should help maintain information flow between distant symbols (i.e. long-range dependencies). It would be good to see this empirically validated by evaluating performance on long sentences specifically.
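For reference, a minimal sketch of the standard scaled dot-product / multi-head attention formulation I have in mind (my notation, not necessarily the paper's exact parameterization):
Attention(Q, K, V) = softmax(Q K^T / \sqrt{d_k}) V
head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)
MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O
In self-attention, Q, K, and V are presumably all the same input vectors mapped through different learned projections, so keys and values would come from the same vectors rather than from different sections of them.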
Minor comments:
- Are you using dropout on the source/target embeddings?
- Line 146: There seems to be a dangling "2" | - The complexity argument claims that self-attention models have a maximum path length of 1, which should help maintain information flow between distant symbols (i.e. long-range dependencies). It would be good to see this empirically validated by evaluating performance on long sentences specifically. Minor comments: |
cMMxJxzYkZ | EMNLP_2023 | - Some of the results tables claim that statistical significance testing has taken place, but it’s unclear which things are being compared in the statistical tests. This seems important to clarify since some of the differences are a bit smaller.
- This paper is empirically significant in establishing a new SOTA on this benchmark, but the technical novelty of the approach is more limited. It’s not proposing a new modelling approach, and the ideas for improving over chatgpt (e.g. two stage reasoning with emotion prediction, using a knowledge base, etc) are similar to prior approaches that have been successful in improving smaller models on this dataset. | - This paper is empirically significant in establishing a new SOTA on this benchmark, but the technical novelty of the approach is more limited. It’s not proposing a new modelling approach, and the ideas for improving over chatgpt (e.g. two stage reasoning with emotion prediction, using a knowledge base, etc) are similar to prior approaches that have been successful in improving smaller models on this dataset. |
NIPS_2020_1605 | NIPS_2020 | 1) Discrete regression outputs: The output space for the final regression model is *discrete*, and this may not be entirely desirable in applications where the regression function is expected to make continuous predictions. I understand that prior works on fair regression also discretize the regression problem to turn it into a classification problem, but this could be a limitation in practice. 2) Feasibility result doesn't take errors in multipliers into account: There is a gap in the theoretical results presented. The main optimality and feasibility results in Sec 3.1 assume that the Lagrange multipliers \lambda's are exactly *optimal*. However, the multipliers output by Algo 1 achieve a dual objective that is "\epsilon-close" to the optimal dual objective. Are your feasibility and optimality results robust to errors in the multipliers? In particular, your proofs suggest that you would need the multipliers to have a near-zero dual gradient, whereas you guarantee that the multipliers output by Algo 1 are close-to-optimal in objective value. 3) Can you handle the more common case of inequality constraints with slack? The current definition of a "fair predictor" (Defn 2.1) insists that the output distribution for each sensitive attribute be equal, and as a result the authors end up with "equality" fairness constraints in their formulation. However, in practice, its more reasonable to allow for a slack in the fairness constraints (so that one can better control the trade-off with the MSE objective) and turn them into inequality constraints. How important is it that the fairness constraints in your formulation be equality constraints? With inequality constraints, you would additionally need the Lagrange multipliers to be non-negative, and as a result may not be able to simply equate the gradient of the dual objective to 0 to show feasibility. Moreover, the accelerated GD algorithm would need to explicitly impose non-negativity constraints on the multipliers (e.g. using a prox term). Overall, the paper's strengths outweigh the flaws. I'll be happy to adjust my scores based on the authors' response to question (2) in particular. Finally, while the authors already consider a number of baselines, for whatever its worth, here's one more post-processing baseline that I've found to work very well in practice for regression problems. [3] Jiang et al. "Wasserstein Fair Classification", UAI 2019. http://auai.org/uai2019/proceedings/papers/315.pdf While the problem setup here is that of class probability estimation, with a [0,1] continuous output space, the post-processing algorithm in Algo 2 in their paper is usually very effective for imposing demographic parity constraints even for general real-valued regression outputs. Instead of equalizing the output distribution between two groups, this paper aims to match the output distribution for the each group with the fixed distribution of labels P(Y = \ell) in the data. While this is a more restrictive constraint, it allows for a simpler quantile-based post-processing step. Minor comments: - In the experimental comparisons, how do you set the slack \epsilon for the constraints when you ran the baseline methods (e.g. Agarwal et al.)? Unless you fix the same slack on the KS violation for all the methods, it becomes hard to perform a fair comparison. - You currently report the constraint violation (KS metric) on the test set. 
What does this metric look like for your method on the train set -- in the absence of generalization error, to what extent does the discretization done in Sec 2.2 affect the violation in constraints? - In the proof of Prop 2.6, is it obvious why strong duality holds for the min-max problem despite the indicator function in line 522? In particular, when you say it is easy to see that R(\tilde{g}_{\lambda^*}) = R(g^*) (line 535), wouldn't this require interchanging the min and max (or at least a proof similar to Lemma C.2)? - In the discussion of VC theory in Appendix B and the results that follow, my understanding is that you turn the problem of predicting an output in a range \mathcal{C} \subset \mathbb{R} into a binary classification problem, and bound the VC-dimension of the regressor-turned-classifier. Please correct me if my understanding is wrong. It might help the reader if you provide some intuition early on in this section in plain text. | 1) Discrete regression outputs: The output space for the final regression model is *discrete*, and this may not be entirely desirable in applications where the regression function is expected to make continuous predictions. I understand that prior works on fair regression also discretize the regression problem to turn it into a classification problem, but this could be a limitation in practice. |
NIPS_2018_874 | NIPS_2018 | --- None of these weaknesses stand out as major and they are not ordered by importance. * Role of and relation to human judgement: Visual explanations are useless if humans do not interpret them correctly (see framework in [1]). This point is largely ignored by other saliency papers, but I would like to see it addressed (at least in brief) more often. What conclusions are humans supposed to make using these explanations? How can we be confident that users will draw correct conclusions and not incorrect ones? Do the proposed sanity checks help identify explanation methods which are more human friendly? Even if the answer to the last question is no, it would be useful to discuss. * Role of architectures: Section 5.3 addresses the concern that architectural priors could lead to meaningful explanations. I suggest toning down some of the bolder claims in the rest of the paper to allude to this section (e.g. "properties of the model" -> "model parameters"; l103). Hint at the nature of the independence when it is first introduced. Incomplete or incorrect claims: * l84: The explanation of GBP seems incorrect. Gradients are set to 0, not activations. Was the implementation correct? * l86-87: GradCAM uses the gradient of classification output w.r.t. feature map, not gradient of feature map w.r.t. input. Furthermore, the Guided GradCAM maps in figure 1 and throughout the paper appear incorrect. They look exactly (pixel for pixel) equivalent to the GBP maps directly to their left. This should not be the case (e.g., in the first column of figure 2 the GradCAM map assigns 0 weight to the top left corner, but somehow that corner is still non-0 for Guided GradCAM). The GradCAM maps look like they're correct. l194-196: These methods are only equivalent gradient * input in the case of piecewise linear activations. l125: Which rank correlation is used? Theoretical analysis and similarity to edge detector: * l33-34: The explanations are only somewhat similar to an edge detector, and differences could reflect model differences. Even if the same, they might result from a model which is more complex than an edge detector. This presentation should be a bit more careful. * The analysis of a conv layer is rather hand wavy. It is not clear to me that edges should appear in the produced saliency mask as claimed at l241. The evidence in figure 6 helps, but it is not completely convincing and the visualizations do not (strictly speaking) immitate an edge detector (e.g., look at the vegitation in front of the lighthouse). It would be useful to include a conv layer initialized with sobel filter and a canny edge detector in figure 6. Also, quantitative experimental results comparing an edge detector to the other visual explanations would help. Figure 14 makes me doubt this analysis more because many non-edge parts of the bird are included in the explanations. Although this work already provides a fairly large set of experiments there are some highly relevant experiments which weren't considered: * How much does this result rely on the particular (re)intialization method? Which initialization method was used? If it was different than the one used to train the model then what justifies the choice? * How do these explanations change with hyperparameters like choice of activation function (e.g., for non piecewise linear choices). How do LRP/DeepLIFT (for non piecewise linear activations) perform? * What if the layers are randomized in the other direction (from input to output)? 
Is it still the classifier layer that matters most? * The difference between gradient * input in Fig3C/Fig2 and Fig3A/E is striking. Point that out. * A figure and/or quantitative results for section 3.2 would be helpful. Just how similar are the results? Quality --- There are a lot of weaknesses above and some of them apply to the scientific quality of the work but I do not think any of them fundamentally undercut the main result. Clarity --- The paper was clear enough, though I point out some minor problems below. Minor presentation details: * l17: Incomplete citation: "[cite several saliency methods]" * l122/126: At first it says only the weights of a specific layer are randomized, next it says that weights from input to specific layer are randomized, and finally (from the figures and their captions) it says reinitialization occurs between logits and the indicated layer. * Are GBP and IG hiding under the input * gradient curve in Fig3A/E? * The presentation would be better if it presented the proposed approach as one metric (e.g., with a name), something other papers could cite and optimize for. * GradCAM is removed from some figures in the supplement and Gradient-VG is added without explanation. Originality --- A number of papers evaluate visual explanations but none have used this approach to my knowledge. Significance --- This paper could lead to better visual explanations. It's a good metric, but it only provides sanity checks and can't identify really good explanations, only bad ones. Optimizing for this metric would not get the community a lot farther than it is today, though it would probably help. In summary, this paper is a 7 because of novelty and potential impact. I wouldn't argue too strongly against rejection because of the experimental and presentation flaws pointed out above. If those were fixed I would argue strongly against rejection. [1]: Doshi-Velez, Finale and Been Kim. âA Roadmap for a Rigorous Science of Interpretability.â CoRR abs/1702.08608 (2017): n. pag. | * How do these explanations change with hyperparameters like choice of activation function (e.g., for non piecewise linear choices). How do LRP/DeepLIFT (for non piecewise linear activations) perform? |
ARR_2022_297_review | ARR_2022 | 1. While it is fair to say that two annotators might have different answers to the same question and both might be correct, it would be better to verify that the answers provided are all valid. The authors manually validate a small subset of the dataset but, for a high quality dataset, it would be better to validate all of it.
2. The paper should include automatic metrics for the generation task. While the metrics have their own problems, it would be a good way to compare systems without expensive human evaluation.
1. It would be good if you expand on the importance of the order of wh-words used for question generation.
2. Line 112: ‘Consider the second example’ actually refers to the first example in Figure 1.
3. I agree with the authors about the framing of the classification task, that it isn’t a realistic one. Maybe the paper would be better without it. | 2. The paper should include automatic metrics for the generation task. While the metrics have their own problems, it would be a good way to compare systems without expensive human evaluation. |
ARR_2022_209_review | ARR_2022 | - The proposed method heavily relies on BERT-based encoders and BERT has a word limit of 512 tokens. But most discharge summaries in MIMIC-III have much more than 512 tokens. This may mean a lot of information in discharge summaries is truncated and the model may not be able to build a comprehensive representation of patient condition.
- The reliability and interoperability of the proposed method are in doubt based on Figure 3 which shows a high percentage of unhelpful literature is retrieved especially for the LOS task. How will such unhelpful literature impact patient outcomes? How can this be improved?
- The performance on LOS is not convincing and the paper does not provide much insight on why.
- The experiments do not seem to consider structured features at all (e.g. 17 clinical features from [1] based on MIMIC-III) which however are critical for patient outcome prediction from both clinical and ML perspectives [2]. The experiments may need a baseline that leverages structured features to show the advantage of using clinical notes and interpret BEEP's performance.
[1] https://www.nature.com/articles/s41597-019-0103-9 [2] https://arxiv.org/abs/2107.11665
- In the abstract and experiment section, expressions like "5 points" are confusing. "5% increase" or "0.05 increase" would be clearer.
- In the abstract, what is "increasing F1 by up to t points and precision @Top-K by a large margin of over 25%" based on? The paper may make the statement clearer by mentioning the specific setup that achieves the largest improvement margin.
- Based on Appendix B, the bi-encoder is trained on TREC 2016 with 30 EHRs. The paper may discuss how representative these 30 EHRs are of MIMIC-III EHRs. Also, as 30 is rather small, the paper may discuss whether it is enough empirically.
- Line 559, the paper may discuss why the model does not perform well on LOS, why a high percentage of unhelpful literature is retrieved (even for correct predictions), and how such a high percentage of unhelpful literature impacts the reliability of the model.
- The paper may discuss why use MeSH rather than other ontologies like UMLS, SNOMED, HPO etc.
- Are those 8 categories in Appendix I mutually exclusive? | - In the abstract, what is "increasing F1 by up to t points and precision @Top-K by a large margin of over 25%" based on? The paper may make the statement clearer by mentioning the specific setup that achieves the largest improvement margin. |
ICLR_2022_1926 | ICLR_2022 | 1. The empirical results may be only marginally significant. For example, in Table 2, the proposed method cannot surpass SOTA under several settings. Plus the current version only conducts experiments on bert-base-uncased. It would be helpful to validate the proposed method using at least one more pre-trained language model like RoBERTa. 2. Actually I like simple but effective methods. But given that the empirical results are only marginally significant, I am worried that the proposed method might be too simple.
Plus, some technical details are not clear to me. See my questions below. Questions:
Should the probability ratio in Eq 4 be inside the ∑? Or shall we use w′ inside the ∑?
For each minibatch, does the proposed method update all the embeddings of words in vocab or just update words present in the current batch?
The proposed method can help defend against backdoor attacks with only 1% of clean training data, while the SOTA method NAD needs more. I am wondering whether this is only because of the few-shot property of the prompt, or whether it is due to the proposed gradient broadcast.
What is the proposed soft template optimization for prompt? | 1. The empirical results may be only marginally significant. For example, in Table 2, the proposed method cannot surpass SOTA under several settings. Plus the current version only conducts experiments on bert-base-uncased. It would be helpful to validate the proposed method using at least one more pre-trained language model like RoBERTa. |
NIPS_2017_357 | NIPS_2017 | - the manuscript is mainly a continuation of previous work on OT-based DA
- while the derivations are different, the conceptual difference from previous work is limited
- theoretical results and derivations are w.r.t. the loss function used for learning (e.g.
hinge loss), which is typically just a surrogate, while the real performance measure would
be 0/1 loss. This also makes it hard to compare the bounds to previous work that used 0-1 loss
- the theorem assumes a form of probabilistic Lipschitzness, which is not explored well.
Previous discrepancy-based DA theory does not need Prob.Lipschitzness and is more flexible
in this respect.
- the proved bound (Theorem 3.1) is not uniform w.r.t. the labeling function $f$. Therefore,
it does not suffice as a justification for the proposed minimization procedure.
- the experimental results do not show much better results than previous OT-based DA methods
- as the proposed method is essentially a repeated application of the previous work, I would have
hoped to see real-data experiments exploring this. Currently, performance after different number
of alternating steps is reported only in the supplemental material on synthetic data.
- the supplemental material feels rushed in some places. E.g. in the proof of Theorem 3.1, the
first inequality on page 4 seems incorrect (as the integral is w.r.t. a signed measure, not a
prob.distr.). I believe the proof can be fixed, though, because the relation holds without
absolute values, and it's not necessary to introduce these in (3) anyway.
- In the same proof, Equations (7)/(8) seem identical to (9)/(10)
questions to the authors:
- please comment if the effect of multiple BCD on real data is similar to the synthetic case ***************************
I read the author response and I am still in favor of accepting the work. | - the proved bound (Theorem 3.1) is not uniform w.r.t. the labeling function $f$. Therefore, it does not suffice as a justification for the proposed minimization procedure. |
ARR_2022_343_review | ARR_2022 | 1) Questionable usefulness of experiments on small datasets - As the paper itself states in the beginning, a possible weakness of earlier works is that their experiments were conducted on small datasets. In such cases it is unclear whether conclusions also apply to current MT systems trained on large datasets.
- This criticism also applies to this paper under review, since many experiments are conducted using IWSLT data. I would like the paper to at least acknowledge this weakness.
- It is questionable whether the outcome of the IWSLT experiments can be used to sub-select experimental conditions to try on the larger WMT data sets.
2) The paper is too dismissive of MBR decoding, without much evidence - The paper implies that MBR is always worse than other decoding methods and that "beam search with length normalization is the best decoding algorithm".
- I would change the wording to be more careful about describing the MBR results. Sampling-based MBR decoding is very new in MT and has a lot of potential to be optimized further. It is simply not well optimized yet. For instance, very recent published work such as https://arxiv.org/abs/2111.09388 shows that MBR improves considerably if a learned metric like COMET is used as the utility function for MBR (a minimal sketch of the procedure is given below).
- I also take issue with this sentence: "Sampling from character-level models leads to very poor translation quality that in turn also influences the MBR decoding that leads to much worse results than beam search." I believe that the quality of sampling is not necessarily indicative of the ability of MBR to pick a good translation from a pool of samples.
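To make the sketch referred to above concrete, here is a minimal, illustrative implementation of sampling-based MBR decoding; the utility function is a toy token-overlap stand-in for chrF/BLEU/COMET and is not meant to reproduce the paper's setup:
```python
# Minimal sketch of sampling-based MBR decoding over a pool of model samples.
def utility(candidate: str, reference: str) -> float:
    # Toy stand-in utility: token-level F1 overlap with a pseudo-reference.
    c, r = candidate.split(), reference.split()
    overlap = len(set(c) & set(r))
    if not c or not r or overlap == 0:
        return 0.0
    precision, recall = overlap / len(c), overlap / len(r)
    return 2 * precision * recall / (precision + recall)

def mbr_decode(samples: list[str]) -> str:
    # Each sampled candidate is scored by its average utility against all
    # samples (used as pseudo-references); the highest-scoring one is returned.
    def expected_utility(cand: str) -> float:
        return sum(utility(cand, ref) for ref in samples) / len(samples)
    return max(samples, key=expected_utility)

samples = ["the cat sat on the mat", "a cat sat on the mat", "the cat is sitting on a mat"]
print(mbr_decode(samples))
```
The point being that MBR quality depends on both the sample pool and the utility function, so a weak utility or an unoptimized pool can make MBR look worse than it needs to, which is consistent with the argument above.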
1) Suggestion for the general literature review: - The only kind of work I would add are proposals to learn an ideal segmentation (instead of fixing the segmentation before starting the training). One paper I think would make sense is: https://www.cl.uni-heidelberg.de/~sokolov/pubs/kreutzer18learning.pdf (also has character-level MT in the title).
2) Questions: - How many samples are the "sample" results in Table 2 based on? I believe this could be a point estimate of just 1 random sample. It is expected that the quality of one sample will be low, but the variance will also be very high. It would be better to show the mean and standard deviation of many samples, or at least clarify how exactly this result is produced.
3) Figures and presentation: - I believe that the color scheme in tables is confusing at times. In Table 1, I think it is confusing that a deeper shade of red means better results. In Table 2 it is non-intuitive that the first row is CHRF and the second row is COMET - and the table is quite hard to read.
4) Typos: - "and thus relatively frequent occurrence of out-of-vocabulary tokens." - a word is missing here - "The model shrinks character sequences into less hidden states" -> "fewer hidden states" - "does not apply non-linearity" -> "does not apply a non-linearity" | 3) Figures and presentation:- I believe that the color scheme in tables is confusing at times. In Table 1, I think it is confusing that a deeper shade of red means better results. In Table 2 it is non-intuitive that the first row is CHRF and the second row is COMET - and the table is quite hard to read. |
ACL_2017_768_review | ACL_2017 | First, the classification model used in this paper (concat + linear classifier) was shown to be inherently unable to learn relations in "Do Supervised Distributional Methods Really Learn Lexical Inference Relations?" (Levy et al., 2015). Second, the paper makes superiority claims in the text that are simply not substantiated in the quantitative results. In addition, there are several clarity and experiment setup issues that give an overall feeling that the paper is still half-baked.
= Classification Model = Concatenating two word vectors as input for a linear classifier was mathematically proven to be incapable of learning a relation between words (Levy et al., 2015). What is the motivation behind using this model in the contextual setting?
While this handicap might be somewhat mitigated by adding similarity features, all these features are symmetric (including the Euclidean distance, since |L-R| = |R-L|). Why do we expect these features to detect entailment?
I am not convinced that this is a reasonable classification model for the task.
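To spell out the Levy et al. (2015) point above in my own notation: a linear classifier over the concatenation [x; y] scores a pair as w1 \cdot x + w2 \cdot y + b, i.e., a sum of a function of x alone and a function of y alone. Such a score can capture how "hypernym-like" or "hyponym-like" each word is on its own, but it cannot capture an interaction between x and y, which is what a relation such as entailment requires. The added similarity features change this only through symmetric quantities, which is why their usefulness for a directional relation is unclear.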
= Superiority Claims = The authors claim that their contextual representation is superior to context2vec. This is not evident from the paper, because: 1) The best result (F1) in both table 3 and table 4 (excluding PPDB features) is the 7th row. To my understanding, this variant does not use the proposed contextual representation; in fact, it uses the context2vec representation for the word type.
2) This experiment uses ready-made embeddings (GloVe) and parameters (context2vec) that were tuned on completely different datasets with very different sizes. Comparing the two is empirically flawed, and probably biased towards the method using GloVe (which was trained on a much larger corpus).
In addition, it seems that the biggest boost in performance comes from adding similarity features and not from the proposed context representation. This is not discussed.
= Miscellaneous Comments = - I liked the WordNet dataset - using the example sentences is a nice trick.
- I don’t quite understand why the task of cross-lingual lexical entailment is interesting or even reasonable.
- Some basic baselines are really missing. Instead of the "random" baseline, how well does the "all true" baseline perform? What about the context-agnostic symmetric cosine similarity of the two target words?
- In general, the tables are very difficult to read. The caption should make the tables self-explanatory. Also, it is unclear what each variant means; perhaps a more precise description (in text) of each variant could help the reader understand?
- What are the PPDB-specific features? This is really unclear.
- I could not understand 8.1.
- Table 4 is overfull.
- In table 4, the F1 of "random" should be 0.25.
- Typo in line 462: should be "Table 3" = Author Response = Thank you for addressing my comments. Unfortunately, there are still some standing issues that prevent me from accepting this paper: - The problem I see with the base model is not that it is learning prototypical hypernyms, but that it's mathematically not able to learn a relation.
- It appears that we have a different reading of tables 3 and 4. Maybe this is a clarity issue, but it prevents me from understanding how the claim that contextual representations substantially improve performance is supported.
Furthermore, it seems like other factors (e.g. similarity features) have a greater effect. | - In table 4, the F1 of "random" should be 0.25. |
NIPS_2019_1408 | NIPS_2019 | - The paper is not that original given the amount of work in learning multimodal generative models: – For example, from the perspective of the model, the paper builds on top of the work by Wu and Goodman (2018) except that they learn a mixture of experts rather than a product of experts variational posterior. – In addition, from the perspective of the 4 desirable attributes for multimodal learning that the authors mention in the introduction, it seems very similar to the motivation in the paper by Tsai et al., Learning Factorized Multimodal Representations, ICLR 2019, which also proposed a multimodal factorized deep generative model that performs well for discriminative and generative tasks as well as in the presence of missing modalities. The authors should have cited and compared with this paper. ****************************Quality**************************** Strengths: - The experimental results are nice. The paper claims that their MMVAE model fulfills all four criteria, including (1) latent variables that decompose into shared and private subspaces, (2) be able to generate data across all modalities, (3) be able to generate data across individual modalities, and (4) improve discriminative performance in each modality by leveraging related data from other modalities. Let's look at each of these 4 in detail: – (1) Yes, their model does indeed learn factorized variables, which can be shown by good conditional generation on the MNIST+SVHN dataset. – (2) Yes, joint generation (which I assume to mean generation from a single modality) is performed on vision -> vision and language -> language for CUB. – (3) Yes, conditional generation can be performed on CUB via language -> vision and vice versa. Weaknesses: - (continuing on whether the model does indeed achieve the 4 properties that the authors describe) – (3 continued) However, it is unclear how significant the performance is for both 2) and 3) since the authors report no comparisons with existing generative models, even simple ones such as a conditional VAE from language to vision. In other words, what if I forgo the complicated MoE VAE and all the components of the proposed model, and simply use a conditional VAE from language to vision? There are many ablation studies that are missing from the paper, especially since the model is so complicated. – (4) The authors do not seem to have performed extensive experiments for this criterion since they only report the performance of a simple linear classifier on top of the latent variables. There has been much work in learning discriminative models for multimodal data involving aligning or fusing language and vision spaces. Just to name a few involving language and vision: - Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding, EMNLP 2016 - DeViSE: A Deep Visual-Semantic Embedding Model, NeurIPS 2013 Therefore, it is important to justify why I should use this MMVAE model when there is a lot of existing work on fusing multimodal data for prediction. ****************************Clarity**************************** Strengths: - The paper is generally clear. I particularly liked the introduction of the paper, especially motivation Figures 1 and 2. Figure 2 is particularly informative given what we know about multimodal data and multimodal information. - The table in Figure 2 nicely summarizes some of the existing works in multimodal learning and whether they fulfill the 4 criteria that the authors have pointed out to be important.
Weaknesses: - Given the authors' great job in setting up the paper via Figure 1, Figure 2, and the introduction, I was rather disappointed that section 2 did not continue this clear flow. To begin, a model diagram/schematic at the beginning of section 2 would have helped a lot. Ideally, such a model diagram could closely resemble Figure 2, where you have already set up a nice 'Venn diagram' of multimodal information. Given this, your model basically assigns latent variables to each of the overlapping information spaces as well as arrows (neural network layers) as the inference and generation paths from the variables to observed data. Showing such a detailed model diagram in an 'expanded' or 'more detailed' version of Figure 2 would be extremely helpful in understanding the notation (of which there is a lot), how MMVAE accomplishes all 4 properties, as well as the inference and generation paths in MMVAE. - Unfortunately, the table in Figure 2 is not super complete given the amount of work that has been done in latent factorization (e.g. Learning Factorized Multimodal Representations, ICLR 2019) and purely discriminative multimodal fusion (i.e. point d on synergy). - There are a few typos and stylistic issues: 1. line 18: "Given the lack explicit labels available" -> "Given the lack of explicit labels available" 2. line 19: "can provided important" -> "can provide important" 3. line 25: "between (Yildirim, 2014) them" -> "between them (Yildirim, 2014)" 4. and so on... ****************************Significance**************************** Strengths: - This paper will likely be a nice addition to the current models we have for processing multimodal data, especially since the results are quite interesting. - The paper did a commendable job in attempting to perform experiments to justify the 4 properties they outlined in the introduction. - I can see future practitioners using the variational MoE layers for encoding multimodal data, especially when there is missing multimodal data. Weaknesses: - That being said, there are some important concerns, especially regarding the utility of the model as compared to existing work. In particular, there are some statements in the model description where it would be nice to have some experimental results in order to convince the reader that this model compares favorably with existing work: 1. line 113: You set \alpha_m uniformly to be 1/M, which implies that the contributions from all modalities are the same. However, works in multimodal fusion have shown that dynamically weighting the modalities is quite important because 1) modalities might contain noise or uncertain information, and 2) different modalities contribute differently to the prediction (e.g. in a video, when a speaker is not saying anything, their visual behaviors are more indicative than their speech or language behaviors). Recent works therefore study, for example, gated attentions (e.g. Gated-Attention Architectures for Task-Oriented Language Grounding, AAAI 2018, or Multimodal Sentiment Analysis with Word-level Fusion and Reinforcement Learning, ICMI 2017) to learn these weights. How does your model compare to this line of related work, and can your model be modified to take advantage of these fusion methods? 2. line 145-146: "We prefer the IWAE objective over the standard ELBO objective not just for the fact that it estimates a tighter bound, but also for the properties of the posterior when computing the multi-sample estimate." -> Do you have experimental results that back this up?
How significant is the difference? 3. line 157-158: "needing M^2 passes over the respective decoders in total" -> Do you have experimental runtimes to show that this is not a significant overhead? The number of modalities is quite small (2 or 3), but when the decoders are large recurrent or deconvolutional layers, then this could be costly. ****************************Post Rebuttal**************************** The author response addressed some of my concerns regarding novelty, but I am still inclined to keep my score since I do not believe that the paper substantially improves over (Wu and Goodman, 2018) and (Tsai et al., 2019). The clarity of writing can be improved in some parts, and I hope that the authors will make these changes. Regarding the quality of generation, it is definitely not close to SOTA language models such as GPT-2, but I would still give the authors credit since generation is not their main goal, but rather one of their 4 defined goals to measure the quality of multimodal representation learning. | - The paper did a commendable job in attempting to perform experiments to justify the 4 properties they outlined in the introduction.
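For background on the IWAE-vs-ELBO question raised in item 2 of this review: the two objectives differ only in where the logarithm meets the average over the K importance samples, which is why the IWAE bound is never below the ELBO. The sketch below illustrates the two estimators on a toy 1-D Gaussian model; the model, proposal, and sample count are illustrative assumptions and have nothing to do with the reviewed paper's architecture.

```python
import numpy as np

def elbo_and_iwae(log_w):
    """Compare the single-sample-average ELBO with the K-sample IWAE bound.

    log_w: array of shape (K,) with log importance weights
           log w_k = log p(x, z_k) - log q(z_k | x).
    ELBO = mean_k log w_k            (Jensen applied inside the average)
    IWAE = log mean_k exp(log w_k)   (tighter: Jensen applied once, outside)
    """
    log_w = np.asarray(log_w, dtype=float)
    elbo = log_w.mean()
    iwae = np.logaddexp.reduce(log_w) - np.log(len(log_w))  # numerically stable log-mean-exp
    return elbo, iwae

# Toy check on a 1-D model p(z) p(x|z) with a deliberately mismatched proposal q(z|x):
rng = np.random.default_rng(0)
x = 1.5
z = rng.normal(loc=0.0, scale=2.0, size=64)                      # samples from q(z|x) = N(0, 2^2)
log_q = -0.5 * (z / 2.0) ** 2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)
log_p_z = -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)                # p(z) = N(0, 1)
log_p_x_given_z = -0.5 * (x - z) ** 2 - 0.5 * np.log(2 * np.pi)  # p(x|z) = N(z, 1)
elbo, iwae = elbo_and_iwae(log_p_z + log_p_x_given_z - log_q)
print(f"ELBO = {elbo:.3f} <= IWAE = {iwae:.3f}")                 # IWAE is the tighter lower bound
```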
ACL_2017_614_review | ACL_2017 | As a reader of an ACL paper, I usually ask myself what important insight I can take away from the paper, and, from a big-picture point of view, what the paper adds to the fields of natural language processing and computational linguistics. How does the task of lexical substitutability in general, and this paper in particular, help either improve an NLP system or provide insight about language? I can't find a good answer to either question after reading this paper.
As a practitioner who wants to improve natural language understanding systems, I am more focused on the first question -- do the lexical substitutability task and the improved results over prior work presented here help any end application? Given the current state of high-performing systems, any discrete clustering of words (or longer utterances) often breaks down when compared to continuous representations of words (see all the papers that utilize discrete lexical semantics to achieve a task versus words' distributed representations used as an input to the same task; e.g. machine translation, question answering, sentiment analysis, text classification, and so forth). How do the authors motivate work on lexical substitutability given that discrete lexical semantic representations often don't work well? The introduction cites a few papers from several years back that are mostly set up in small-data scenarios, and given that this work is based on English, I don't see why one would use this method for any task. I would be eager to see the authors' responses to this general question of mine.
As a minor point, to further motivate this, consider the substitutes presented in Table 1.
1. Tasha snatched it from him to rip away the paper.
2. Tasha snatched it from him to rip away the sheet.
To me, these two sentences have varying meanings -- what if he was holding on to a paper bag? In that scenario, can the word "paper" be substituted by "sheet"? At least, in my understanding, it cannot. Hence, there is so much subjectivity in this task that lexical substitutes can completely alter the semantics of the original sentence.
Minor point(s): - Citations in Section 3.1.4 are missing.
Addition: I have read the author response and I am sticking to my earlier evaluation of the paper. | - Citations in Section 3.1.4 are missing. Addition: I have read the author response and I am sticking to my earlier evaluation of the paper. |
NIPS_2019_346 | NIPS_2019 | weakness of the paper is a lack of theoretical results on the proposed methodology. Most of the benefits of the new model have been demonstrated by simulations. It would be very helpful if the authors could provide some theoretical insights on the relation between the model parameters and the tail dependence measures, and on the performance (consistency, efficiency, etc.) of the parameter estimators. Itemized comments: 1. The advantage of the new quantile function (3) compared to the existing function (2) seems unjustified. Compared with (2), (3) changes the multiplicative factors containing the up and down tail parameters into an additive term. While this makes the function less sensitive to the tail parameters when they are large, the paper does not present supporting data on why the reduced sensitivity is desired. 2. On Line 132, the authors concluded that v_{ij} determines mainly the down-tail dependence of y_i and y_j. For any 1 <= k < j, does v_{ik} also have a similar interpretation to v_{ij}? For example, in Equation (4), by symmetry, v_{31} and v_{32} seem to have a similar effect on the tail dependence between y_3 and y_2. 3. In Algorithm 1 on Page 5, \Psi (the set of \tau's in Equation (7)) should also be an input parameter of the algorithm. Moreover, since it determines which quantiles are estimated in the loss function, I'd expect it to have a notable effect on the results. I think it would be helpful to discuss how \Psi was chosen in the experiments, and provide some guidance on its choice in general. 4. Equation (13) doesn't seem to have a closed-form solution in general. Some details about how it's solved in the experiments and on the computational complexity would be helpful. 5. In addition to the up and down tail dependences, how could we also model negative tail dependence, e.g., P(X < Q_X(t), Y > Q_Y(1 - t)) / t? This is the counterpart of negative correlations, and is also notably common in financial asset returns (e.g., when money flows from one asset class (e.g., stocks) to another (e.g., bonds)). Minor comments: 1. In Figures 2 and 3, it may be clearer to see the fitting errors if we overlay the oracle and the fitted lines in the same plot. Update: Thanks to the authors for the feedback. I believe Items 2 and 5 above are well addressed. On the other hand, as pointed out by another reviewer as well, a lack of theoretical results still seems to be the main weakness of the paper, though I agree that due to the complexity of the learning procedure, an extensive theoretical analysis would be a luxury at this stage. | 1. In Figures 2 and 3, it may be clearer to see the fitting errors if we overlay the oracle and the fitted lines in the same plot. Update: Thanks to the authors for the feedback. I believe Items 2 and 5 above are well addressed. On the other hand, as pointed out by another reviewer as well, a lack of theoretical results still seems to be the main weakness of the paper, though I agree that due to the complexity of the learning procedure, an extensive theoretical analysis would be a luxury at this stage.
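The negative tail-dependence measure raised in item 5 of this review is easy to estimate empirically. The sketch below is background only: the function, the threshold t, and the simulated heavy-tailed data are illustrative assumptions, not part of the reviewed paper's model.

```python
import numpy as np

def empirical_tail_dependence(x, y, t=0.05):
    """Crude empirical estimates of tail-dependence measures for paired samples.

    upper:    P(X > Q_X(1-t), Y > Q_Y(1-t)) / t
    lower:    P(X < Q_X(t),   Y < Q_Y(t))   / t
    negative: P(X < Q_X(t),   Y > Q_Y(1-t)) / t   # the measure raised in the review
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    qx_lo, qx_hi = np.quantile(x, t), np.quantile(x, 1 - t)
    qy_lo, qy_hi = np.quantile(y, t), np.quantile(y, 1 - t)
    upper = np.mean((x > qx_hi) & (y > qy_hi)) / t
    lower = np.mean((x < qx_lo) & (y < qy_lo)) / t
    negative = np.mean((x < qx_lo) & (y > qy_hi)) / t
    return {"upper": upper, "lower": lower, "negative": negative}

# Example: negatively dependent heavy-tailed returns (a rough stocks-vs-bonds caricature).
rng = np.random.default_rng(0)
z = rng.standard_t(df=3, size=100_000)
x = z + 0.1 * rng.standard_normal(100_000)
y = -z + 0.1 * rng.standard_normal(100_000)
print(empirical_tail_dependence(x, y, t=0.05))  # "negative" should dominate here
```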
ICLR_2023_2122 | ICLR_2023 | 1. This work has limited technical novelty. The multiscale attention strategy simply learns attention maps at different scales. Such technical novelty is not enough for publication at ICLR. 2. The experimental results are not convincing. It seems that the authors only report the results of the developed method and neglect comparisons with state-of-the-art methods. 3. In the experiments, the authors do not provide any ablation study experiments. | 3. In the experiments, the authors do not provide any ablation study experiments.
NIPS_2018_857 | NIPS_2018 | Weakness: The main idea of learning detectors with carefully generated chips is good and reasonable, but is implemented by a set of simple practical techniques. 3) Weakness: This is an extension of SNIP [24], and focuses mostly on speed-up. Thus its novelty is significantly limited. | 3) Weakness: This is an extension of SNIP [24], and focuses mostly on speed-up. Thus its novelty is significantly limited. |
4DoSULcfG6 | ICLR_2024 | Although the attack and the observation are interesting, I think the paper has the following weak points:
1. Time complexity. It is clear from Algorithm 1 that, to run the adaptive poisoning, the attacker has to run model training many more times than the baseline algorithms, making the proposed algorithm less practical. However, the paper says little about this topic and does not provide any comparison in the experiment section. I think this information is crucial for readers to better understand and appreciate the proposed algorithm.
2. Multiple challenge points. In practice, the attacker usually needs to attack multiple challenge points instead of only one. Although the paper briefly discusses this in the appendix, I think it is far from enough. Specifically, Algorithm 2 is just a simple generalization of Algorithm 1, neglecting many interesting and important problems that arise from having more than one challenge point. For example, the problem of time complexity becomes even worse. Furthermore, due to correlations between different challenge points, it is not clear how Algorithm 2 performs. Consider an extreme case in which two challenge points oppose each other: it is possible that after k_max iterations the algorithm cannot find meaningful k_i for both points simultaneously.
3. Clarity (minor points). The paper needs to improve its clarity. For example, many terms are used without being defined, e.g., LIRA, challenge point, in+out model. It is better to provide those definitions in the preliminaries to make the paper more self-contained. | 3. Clarity (minor points). The paper needs to improve its clarity. For example, many terms are used without being defined, e.g., LIRA, challenge point, in+out model. It is better to provide those definitions in the preliminaries to make the paper more self-contained.
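For readers unfamiliar with the terms this review flags as undefined: in the standard formulation of LiRA (the likelihood-ratio membership-inference attack), the "in"/"out" models are shadow models trained with and without the challenge point, and the attack scores a target model by a ratio of Gaussian likelihoods fitted to those two populations. The sketch below shows only that textbook scoring step; it is not the reviewed paper's algorithm, and all names and numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

def lira_score(target_conf, in_confs, out_confs, eps=1e-6):
    """Likelihood-ratio membership score for one challenge point.

    target_conf: the target model's (logit-scaled) confidence on the challenge point
    in_confs:    same statistic from shadow models trained WITH the point  ("in" models)
    out_confs:   same statistic from shadow models trained WITHOUT it      ("out" models)
    Larger scores mean "more likely a member".
    """
    mu_in, sd_in = np.mean(in_confs), np.std(in_confs) + eps
    mu_out, sd_out = np.mean(out_confs), np.std(out_confs) + eps
    return norm.logpdf(target_conf, mu_in, sd_in) - norm.logpdf(target_conf, mu_out, sd_out)

# Toy usage with made-up shadow-model statistics:
rng = np.random.default_rng(0)
in_confs = rng.normal(4.0, 1.0, size=64)     # members tend to receive higher confidence
out_confs = rng.normal(1.0, 1.5, size=64)
print(lira_score(3.5, in_confs, out_confs))  # positive => evidence of membership
```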
ARR_2022_288_review | ARR_2022 | - While this model is one of a kind in this area, the language model’s scope is too narrow to have a wide range of applications. Other similar BERT-based models (e.g. BioBERT, SciBERT) have wider coverage. The authors should strengthen the claim of the importance of this model by discussing important problems in this area and how this language model will be essential in addressing a wide variety of problems in this domain.
- Although ConfliBERT is a novel language model, the concept, methodology, and implementation follow the BERT model. While it can contribute to the area of political science and computational social science in general, it is not clear how this work contributes to the area of NLP.
- The paper lacks a sound discussion to explain certain observations. Although the experimental results unanimously show that ConfliBERT is a better choice for problems related to this domain, it is not clear which version of ConfliBERT is better, SCR or Cont. On many occasions, we see that the Cont version (built on top of BERT) performs better than SCR. This raises two questions: under what conditions is Cont more likely to perform better, and, given that Cont is trained on top of BERT, which features of BERT are important in this case? The paper should address the merits of BERT that are part of Cont but not of SCR. Additional experiments may help to show whether this is a data problem or a model problem.
The paper argues the need for a domain-specific language model for the area of political conflicts and violence. They validate this claim by showing how such a language model can improve downstream tasks in this area. The paper is clearly written and the claims are evaluated well using an extensive set of experiments.
The advantage and the utility of the proposed language model are clear, but there remain a few gaps that are important for understanding the full strength of this model. The authors should present a clear discussion of the advantages and disadvantages of using the domain-specific datasets used in the training. The authors discussed that their dataset has additional words related to terrorism that are not present in a more general-purpose corpus, but are there any other types of words that enriched the training of ConfliBERT? We see that only 3 out of 9 problems are related to terrorism. Indeed, the SCR model performed better in all the tasks involving terrorism-related data. On the other hand, for protest-related data we see mixed performance, with Cont doing better in 2 out of 3 cases. For the case of Cont, are there any disadvantages to not using a general-purpose model/data? What is the percentage of vocabulary that is present in the general dataset but not in the domain-specific data? If this percentage is large, will that create a roadblock to the wider applicability of this language model? On the other hand, could having too many unrelated words in the general-purpose data act as noise for more domain-dependent tasks? The paper would benefit from a more in-depth analysis and comparison of the datasets used against a more generic corpus.
Minor point: On page 6, section 5.2, lines 514, 515, it is not clear how the p-values were computed. What tests were performed to compare the two cases? | - Although ConfliBERT is a novel language model, the concept, methodology, and implementation follow the BERT model. While it can contribute to the area of political science and computational social science in general, it is not clear how this work contributes to the area of NLP.
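On the minor point about p-values: the review does not say which test was applied, but one common way to compare two fine-tuned models evaluated over the same runs or test items is a paired bootstrap over the score differences. The sketch below is that generic procedure with made-up per-seed F1 scores; it is not a claim about what the authors of the reviewed paper actually did.

```python
import numpy as np

def paired_bootstrap_pvalue(scores_a, scores_b, n_boot=10_000, seed=0):
    """Two-sided paired bootstrap test for the mean difference between two systems.

    scores_a, scores_b: per-run (or per-item) scores for systems A and B, aligned pairwise.
    Returns (observed mean difference, p-value under H0: no difference).
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    observed = diffs.mean()
    centered = diffs - observed                              # enforce the null hypothesis
    idx = rng.integers(0, len(diffs), size=(n_boot, len(diffs)))
    boot_means = centered[idx].mean(axis=1)
    p = np.mean(np.abs(boot_means) >= abs(observed))
    return observed, p

# Hypothetical per-seed F1 scores for two fine-tuned models on the same task:
model_a_f1 = [0.786, 0.791, 0.779, 0.801, 0.788]
model_b_f1 = [0.762, 0.770, 0.758, 0.774, 0.765]
print(paired_bootstrap_pvalue(model_a_f1, model_b_f1))
```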
bKCc3USOyv | ICLR_2025 | 1. The authors' motivation for introducing quaternion operations doesn’t seem very natural. Is it simply because previous methods were insufficient in modeling attribute features, leading to the introduction of this mechanism? Why not consider the many conventional techniques for incorporating attribute features?
2. Could the authors provide more insight into how quaternion encoding impacts the learning process and the quality of the resulting clusters?
3. Do the authors conduct experiments to demonstrate that the proposed method addresses the over-smoothing problem in GNNs?
4. What is the sensitivity of the model's performance to changes in the hyperparameters, especially the trade-off parameters α and β?
5. Can the authors discuss the scalability of their model, particularly for very large graphs with millions of nodes?
6. The authors' baseline methods lack more recent works, especially those from 2024. | 6. The authors' baseline methods lack more recent works, especially those from 2024. |
NIPS_2016_287 | NIPS_2016 | weakness, however, is the experiment on real data where no comparison against any other method is provided. Please see the detailed comments below. 1. While [5] is a closely related work, it is not cited or discussed at all in Section 1. I think proper credit should be given to [5] in Sec. 1 since the spacey random walk was proposed there. The difference between the random walk model in this paper and that in [5] should also be clearly stated to clarify the contributions. 2. The AAAI15 paper titled "Spectral Clustering Using Multilinear SVD: Analysis, Approximations and Applications" by Ghoshdastidar and Dukkipati seems to be a related work missed by the authors. This AAAI15 paper deals with hypergraph data with tensors as well, so it should be discussed and compared against to provide a better understanding of the state of the art. 3. This work combines ideas from [4], [5], and [14], so it is very important to clearly state the relationships and differences with these earlier works. 4. At the end of Sec. 2, there are two important parameters/thresholds to set. One is the minimum cluster size and the other is the conductance threshold. However, the experimental section (Sec. 3) did not mention or discuss how these parameters are set and how sensitive the performance is with respect to these parameters. 5. Sec. 3.2 and Sec. 3.3: The real-data experiments study only the proposed method, and there is no comparison against any existing method on real data. Furthermore, there is only some qualitative analysis/discussion of the real-data results. Adding some quantitative studies would be more helpful to the readers and researchers in this area. 6. Possible typo? Line 131: "wants to transition". | 5. Sec. 3.2 and Sec. 3.3: The real-data experiments study only the proposed method, and there is no comparison against any existing method on real data. Furthermore, there is only some qualitative analysis/discussion of the real-data results. Adding some quantitative studies would be more helpful to the readers and researchers in this area.
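For context on the thresholds mentioned in item 4 of this review: "conductance" is the usual cut-to-volume ratio used to judge cluster quality. The sketch below computes the plain graph version on a toy example; the reviewed paper's hypergraph/tensor variant may differ, so this is background only and the adjacency matrix is an illustrative assumption.

```python
import numpy as np

def conductance(adj, cluster):
    """Standard graph conductance of a vertex subset: cut(S, V\\S) / min(vol(S), vol(V\\S)).

    adj:     symmetric (weighted) adjacency matrix, shape (n, n)
    cluster: iterable of vertex indices forming the candidate cluster S
    """
    adj = np.asarray(adj, dtype=float)
    in_s = np.zeros(adj.shape[0], dtype=bool)
    in_s[list(cluster)] = True
    cut = adj[in_s][:, ~in_s].sum()          # total edge weight leaving the cluster
    vol_s = adj[in_s].sum()                  # sum of degrees inside the cluster
    vol_rest = adj[~in_s].sum()
    return cut / min(vol_s, vol_rest)

# Toy example: two triangles joined by a single edge -> the obvious cluster has low conductance.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(conductance(A, [0, 1, 2]))   # 1 / 7, roughly 0.143
```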